
28 February 2023

Utkarsh Gupta: FOSS Activities in February 2023

Here's my (forty-first) monthly but brief update about the activities I've done in the F/L/OSS world.

Debian
This was my 50th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas '19! \o/ There's a bunch of things I do, both technical and non-technical. Here are the things I did this month:

Uploads

Others
  • Looked up some Release team documentation.
  • Sponsored php-font-lib and php-dompdf-svg-lib for William.
  • Granted DM rights for php-dompdf.
  • Mentoring for newcomers.
  • Reviewed micro bits for Nilesh, new uploads and changes.
  • Ruby sprints.
  • Bug work (on BTS and #debian-ruby) for rails and redmine.
  • Moderation of -project mailing list.
A huge thanks to Freexian for sponsoring my Debian work and Entrouvert for sponsoring the Redmine backports. :D

Ubuntu
This was my 25th month of actively contributing to Ubuntu. Now that I've joined Canonical to work on Ubuntu full-time, there's a bunch of things I do! \o/ I mostly worked on different things, I guess. I was too lazy to maintain a list of things I worked on, so there's no concrete list atm. Maybe I'll get back to this section later or will start to list stuff from the fall, as I was doing before. :D

Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success. And Debian Extended LTS (ELTS) is its sister project, extending support to the stretch and jessie releases (+2 years after LTS support). This was my forty-first month as a Debian LTS and thirty-second month as a Debian ELTS paid contributor.
I worked for 24.25 hours for LTS and 28.50 hours for ELTS.

LTS CVE Fixes and Announcements:
  • Fixed CVE-2022-47016 for tmux and uploaded to buster via 2.8-3+deb10u1.
    But decided to not roll the DLA for the package as the CVE got rejected upstream.
  • Issued DLA 3359-1, fixing CVE-2019-13038 and CVE-2021-3639, for libapache2-mod-auth-mellon.
    For Debian 10 buster, these problems have been fixed in version 0.14.2-1+deb10u1.
  • Issued DLA 3360-1, fixing CVE-2021-30151 and CVE-2022-23837, for ruby-sidekiq.
    For Debian 10 buster, these problems have been fixed in version 5.2.3+dfsg-1+deb10u1.
  • Worked on ruby-rails-html-sanitizer and added notes to the security-tracker.
    TL;DR: we need newer methods in ruby-loofah to make the patches for ruby-rails-html-sanitizer backportable.
  • Started to look at another set of packages in the meantime.

ELTS CVE Fixes and Announcements:
  • Issued ELA 813-1, fixing CVE-2017-12618 and CVE-2022-25147, for apr-util.
    For Debian 8 jessie, these problems have been fixed in version 1.5.4-1+deb8u1.
    For Debian 9 stretch, these problems have been fixed in version 1.5.4-3+deb9u1.
  • Issued ELA 814-1, fixing CVE-2022-39286, for jupyter-core.
    For Debian 9 stretch, these problems have been fixed in version 4.2.1-1+deb9u1.
  • Issued ELA 815-1, fixing CVE-2022-44792 and CVE-2022-44793, for net-snmp.
    For Debian 8 jessie, these problems have been fixed in version 5.7.2.1+dfsg-1+deb8u6.
    For Debian 9 stretch, these problems have been fixed in version 5.7.3+dfsg-1.7+deb9u5.
  • Helped facilitate RabbitMQ's update queries by one of our customers.
  • Started to look at another set of packages in the meantime.

Other (E)LTS Work:
Until next time.
:wq for today.

8 February 2023

Chris Lamb: Most anticipated films of 2023

Very few highly-anticipated movies appear in January and February, as the bigger releases are timed so they can be considered for the Golden Globes in January and the Oscars in late February or early March, so film fans have the advantage of a few weeks after the New Year to collect their thoughts on the year ahead. In other words, I'm not actually late in outlining below the films I'm most looking forward to in 2023...

Barbie No, seriously! If anyone can make a good film about a doll franchise, it's probably Greta Gerwig. Not only was Little Women (2019) more than admirable, the same could definitely be said for Lady Bird (2017). More importantly, I can't help but feel she was the real 'Driver' behind Frances Ha (2012), one of the better modern takes on Claudia Weill's revelatory Girlfriends (1978). Still, whenever I remember that Barbie will be a film about a billion-dollar toy and media franchise with a nettlesome history, I recall I rubbished the "Facebook film" that turned into The Social Network (2010). Anyway, the trailer for Barbie is worth watching, if only because it seems like a parody of itself.

Blitz It's difficult to overstate just how crucial the aerial bombing of London during World War II is to understanding the British psyche, despite it being a constructed phenomenon from the outset. Without wishing to underplay the deaths of over 40,000 civilians, Angus Calder pointed out in the 1990s that the modern mythology surrounding the event "did not evolve spontaneously; it was a propaganda construct directed as much at [then neutral] American opinion as at British." It will therefore be interesting to see how British Grenadian-Trinidadian director Steve McQueen addresses a topic so essential to the British self-conception. (Remember the controversy in right-wing circles about the sole Indian soldier in Christopher Nolan's Dunkirk (2017)?) McQueen is perhaps best known for his 12 Years a Slave (2013), but he recently directed a six-part film anthology for the BBC which addressed the realities of post-Empire immigration to Britain, and this leads me to suspect he sees the Blitz and its surrounding mythology from a more critical perspective. But any attempt to complicate the story of World War II will be vigorously opposed in a way that will make the recent hullabaloo surrounding The Crown seem tame. All this is to say that the discourse surrounding this release may be as interesting as the film itself.

Dune, Part II Coming out of the cinema after the first part of Denis Villeneuve's adaptation of Dune (2021), I was struck by the conception that it was less of a fresh adaptation of the 1965 novel by Frank Herbert than an attempt to rehabilitate David Lynch's 1984 version, and, in a broader sense, an attempt to reestablish the primacy of cinema over streaming TV and the myriad of other distractions in our lives. I must admit I'm not a huge fan of the original novel, finding within it a certain prurience regarding hereditary military regimes and writing about them with a certain sense of glee that belies a secret admiration for them... not to mention an eyebrow-raising allegory for the Middle East. Still, Dune, Part II is going to be a fantastic spectacle.

Ferrari It'll be curious to see how this differs substantially from the recent Ford v Ferrari (2019), but given that Michael Mann's Heat (1995) so effectively re-energised the gangster/heist genre, I'm more than willing to kick the tires of this film about the founder of the eponymous car manufacturer. I'm in the minority for preferring Mann's Thief (1981) over Heat, in part because the former deals in more abstract themes, so I'd have perhaps preferred to look forward to a more conceptual film from Mann over a story about one specific guy.

How Do You Live There are a few directors one can look forward to watching almost without qualification, and Hayao Miyazaki (My Neighbor Totoro, Kiki's Delivery Service, Princess Mononoke, Howl's Moving Castle, etc.) is one of them. And this is especially so given that The Wind Rises (2013) was meant to be the last collaboration between Miyazaki and Studio Ghibli. Let's hope he is able to come out of retirement in another ten years.

Indiana Jones and the Dial of Destiny Given I had a strong dislike of Indiana Jones and the Kingdom of the Crystal Skull (2008), I seriously doubt I will enjoy anything this film has to show me, but with 1981's Raiders of the Lost Ark remaining one of my most treasured films (read my brief homage), I still feel a strong sense of obligation towards the Indiana Jones name, despite it feeling like the copper is being pulled out of the walls of this franchise today.

Kafka I only know Polish filmmaker Agnieszka Holland through her Spoor (2017), an adaptation of Olga Tokarczuk's 2009 eco-crime novel Drive Your Plow Over the Bones of the Dead. I wasn't an unqualified fan of Spoor (nor the book on which it is based), but I am interested in Holland's take on the life of Czech author Franz Kafka, an author enmeshed with twentieth-century art and philosophy, especially that of central Europe. Holland has mentioned she intends to tell the story "as a kind of collage," and I can hope that it is an adventurous take on the over-furrowed biopic genre. Or perhaps Gregor Samsa will awake from uneasy dreams to find himself transformed in his bed into a huge verminous biopic.

The Killer It'll be interesting to see what path David Fincher is taking today, especially after his puzzling and strangely cold Mank (2020) portraying the writing process behind Orson Welles' Citizen Kane (1941). The Killer is said to be a straight-to-Netflix thriller based on the graphic novel about a hired assassin, which makes me think of Fincher's Zodiac (2007), and, of course, Se7en (1995). I'm not as entranced by Fincher as I used to be, but any film with Michael Fassbender and Tilda Swinton (with a score by Trent Reznor) is always going to get my attention.

Killers of the Flower Moon In Killers of the Flower Moon, Martin Scorsese directs an adaptation of a book about the FBI's investigation into a conspiracy to murder Osage tribe members in the early years of the twentieth century in order to deprive them of their oil-rich land. (The only thing more quintessentially American than apple pie is a conspiracy combined with a genocide.) Separate from learning more about this disquieting chapter of American history, I'd love to discover what attracted Scorsese to this particular story: he's one of the few top-level directors who have the ability to lucidly articulate their intentions and motivations.

Napoleon It often strikes me that, despite all of his achievements and fame, it's somehow still possible to claim that Ridley Scott is relatively underrated compared to other directors working at the top level today. Besides that, though, I'm especially interested in this film, not least of all because I just read Tolstoy's War and Peace (read my recent review) and am working my way through the mind-boggling 431-minute Soviet TV adaptation, but also because several auteur filmmakers (including Stanley Kubrick) have tried to make a Napoleon epic and failed.

Oppenheimer In a way, a biopic about the scientist responsible for the atomic bomb and the Manhattan Project seems almost perfect material for Christopher Nolan. He can certainly rely on stars to queue up to be in his movies (Robert Downey Jr., Matt Damon, Kenneth Branagh, etc.), but whilst I'm certain it will be entertaining on many fronts, I fear it will fall into the well-established Nolan mould of yet another single man struggling with obsession, deception and guilt who is trying in vain to balance order and chaos in the world.

The Way of the Wind Marked by philosophical and spiritual overtones, all of Terrence Malick's films are perfumed with themes of transcendence, nature and the inevitable conflict between instinct and reason. My particular favourite is his stunning Days of Heaven (1978), but The Thin Red Line (1998) and A Hidden Life (2019) also touched me in ways difficult to relate, and are among the few films about the Second World War that don't touch off my sensitivity about them (see my remarks about Blitz above). It is therefore somewhat Malickian that his next film will be a biblical drama about the life of Jesus. Given Malick's filmography, I suspect this will be far more subdued than William Wyler's 1959 Ben-Hur and significantly more equivocal in its conviction compared to Pier Paolo Pasolini's ardently progressive The Gospel According to St. Matthew (1964). However, little beyond that can be guessed, and the film may not even appear until 2024 or even 2025.

Zone of Interest I was mesmerised by Jonathan Glazer's Under the Skin (2013), and there is much to admire in his borderline 'revisionist gangster' film Sexy Beast (2000), so I will definitely be on the lookout for this one. The only thing making me hesitate is that Zone of Interest is based on a book by Martin Amis about a romance set inside the Auschwitz concentration camp. I haven't read the book, but Amis has something of a history in his grappling with the history of the twentieth century, and he seems to do it in a way that never sits right with me. But if Paul Verhoeven's Starship Troopers (1997) proves anything at all, it's all in the adaptation.

28 January 2023

Russ Allbery: Review: The Library of the Dead

Review: The Library of the Dead, by T.L. Huchu
Series: Edinburgh Nights #1
Publisher: Tor
Copyright: 2021
Printing: 2022
ISBN: 1-250-76777-6
Format: Kindle
Pages: 329
The Library of the Dead is the first book in a post-apocalyptic (sort of) urban fantasy series set in Edinburgh, written by Zimbabwean author (and current Scotland resident) T.L. Huchu. Ropa is a ghosttalker. This means she can see people who have died but are still lingering because they have unfinished business. She can stabilize them and understand what they're saying with the help of her mbira. At the age of fourteen, she's the sole source of income for her small family. She lives with her grandmother and younger sister in a caravan (people in the US call it an RV), paying rent to an enterprising farmer turned landlord. Ropa's Edinburgh is much worse off than ours. Everything is poorer, more run-down, and more tenuous, but other than a few hints about global warming, we never learn the history. It reminded me a bit of the world in Octavia Butler's Parable of the Sower in the feel of civilization crumbling without a specific cause. Unlike that series, The Library of the Dead is not about the collapse or responses to it. The partial ruin of the city is the mostly unremarked backdrop of Ropa's life. Much of the book follows Ropa's daily life carrying messages for ghosts and taking care of her family. She does discover the titular library when a wealthier friend who got a job there shows it off to her, but it has no significant role in the plot. (That was disappointing.) The core plot, once Ropa is convinced by her grandmother to focus on it, is the missing son of a dead woman, who turns out to not be the only missing child. This is urban fantasy with the standard first-person perspective, so Ropa is the narrator. This style of book needs a memorable protagonist, and Ropa is certainly that. She's a talker who takes obvious delight in narrating her own story alongside a constant patter of opinions, observations, and Scottish dialect. Ropa is also poor. That last may not sound that notable; a lot of urban fantasy protagonists are not well-off. But most of them feel culturally middle-class in a way that Ropa does not. Money may be a story constraint in other books, but it rarely feels like a life constraint and experience the way it does here. It's hard to describe the difference in tone succinctly, since it's a lot of small things: the constant presence of money concerns, the frustration of possessions that are stolen or missing and can't be replaced, the tedious chores one has to do when there's no money, even the language and vulgarity Ropa uses. This is rare in fantasy and excellent characterization work. Given that, I am still frustrated with myself over how much I struggled with Ropa as a narrator. She's happy to talk about what is happening to her and what she's learning about (she listens voraciously to non-fiction while running messages), but she deflects, minimizes, or rushes past any mention of what she's feeling. If you don't like the angst that's common from urban fantasy protagonists, this may be the book for you. I have complained about that angst before, and therefore feel like this should have been the book for me, but apparently I need a minimum level of emotional processing and introspection from the narrator. Ropa is utterly unwilling to do any of that. It's possible to piece together what she's feeling and worrying about, but the reader has to rely on hints and oblique comments that she passes over quickly. It didn't help that Ropa is not interested in the same things in her world that I was interested in. 
She's not an unreliable narrator in the conventional sense; she doesn't lie to the reader or intentionally hide information. And yet, the experience of reading this book was, for me, similar to reading a book with an unreliable narrator. Ropa consistently refused to look at what I wanted her to look at or think about what I wanted her to think about. For example, when she has an opportunity to learn magic through books from the titular library, her initial enthusiasm is infectious. Huchu does a great job showing the excitement of someone who likes new ideas and likes telling other people about the neat things she just learned. But when things don't work the way she expected from the books, she doesn't follow up, experiment, or try to understand why. When her grandmother tries to explain something to her from a different angle, she blows her off and refuses to pay attention. And when she does get magic to work, she never tries to connect that to her previous understanding. I kept waiting for Ropa to try to build her own mental model of magic, but she would only toy with an idea for a few pages and then put it down and never mention it again. This is not a fault in the book, just a mismatch between the book and what I wanted to read. All of this is consistent with Ropa's defensive strategies, emotional resiliency, and approach to understanding the world. (I strongly suspect Huchu was giving Ropa some ADHD characteristics, and if so, I think he got it spot on.) Given that, I tried to pivot to appreciating the characterization and the world, but that ran into another mismatch I had with this book, and the reason why I passed on it when it initially came out. I tend to avoid fantasy novels about ghosts. This is not because I mind ghosts themselves, but I've learned from experience that authors who write about ghosts usually also write about other things that I don't want to read about. That unfortunately was the case here; The Library of the Dead was too far into horror for me. There's child abuse, drugs, body horror, and similar nastiness here, more than I wanted in my head. Ropa's full-speed-ahead attitude and refusal to dwell on anything made it a bit easier to read, but it was still too much for me. Ropa is a great character who is refreshingly different than the typical urban fantasy protagonist, and the few hints of the magical library and world background we get were intriguing. This book was not for me, but I can see why other people will love it. Followed by Our Lady of Mysterious Ailments. Rating: 6 out of 10

26 January 2023

Matt Brown: Goals for 2023

This is the second of a two-part post covering my goals for 2023. See the first part to understand the vision, mission and strategy driving these goals. I want to thank my friend Nat, and Will Larson, whose annual reviews I've always enjoyed reading, for inspiring me to write these posts. I've found the process of articulating my motivations and goals very useful to clarify my thoughts and create tangible next steps. I'm grateful for that in and of itself, but I also hope that by publishing this you too might find it interesting, and that the additional public accountability it creates will be a positive encouragement to me.

2023 Goals My focus for 2023 is to bootstrap a business that I can use to build software that solves real problems (see the strategy from the previous post for more details on this). I'm going to track this via three goals:
  1. Execute a series of successful consulting engagements, building a reputation for myself and leaving happy customers willing to provide testimonials that support a pipeline of future opportunities.
  2. Grow my product development skill set by taking several ideas to MVP stage with customer feedback received, and launch at least one product which generates revenue and has growth potential.
  3. Develop and maintain a broad professional network.

Consulting Based on my background and experience, I plan to target my consulting across three areas:
  1. Leadership - building and growing operationally focused software teams following SRE/devops principles. A typical engagement may involve helping a client establish a brand new SRE/devops practice, or to strengthen and mature the existing practices used to build and operate reliable software in their team(s).
  2. Architecture - applying deep technical expertise to the design of large software systems, particularly focusing on their reliability and operability. A typical engagement may involve design input and decision making support for key aspects of a new system, providing external review and analysis to improve an existing system, or delivering actionable, tactical next steps during or immediately after a reliability crisis.
  3. Technology Strategy - translating high-level business needs into a technical roadmap that provides understandable explanations of the value software can deliver in that context, and the iterative series of appropriately sized projects required to realise it. A typical customer for this would be a small to medium sized business outside of the software industry with a desire to use software in a transformative way to improve their business but who does not employ the necessary in-house expertise to lead that transition.

Product Development There are three, currently extremely high level, product ideas that I'm excited to explore:
  1. Improve co-ordination of electricity resources to accelerate the electrification of NZ's energy demand and the transition to a zero-carbon grid. NZ has huge potential to be a world-leader in decarbonising energy use through electrification, but requires a massive transition to realise the benefits. Many of the challenges to that transition involve coordination of an order of magnitude more distributed energy resources (DER) in a much more dynamic and software-oriented manner than the electricity industry is traditionally experienced with. The concept of improving DER coordination is not novel, but our grid has unique characteristics that mean we're likely to need to build localised solutions. There is a strong match between my experience with large, high-reliability distributed software systems and this need. With renewed motivation in the industry for rapid progress, and many conversations and consultations still in their early stages, this is a very compelling space to explore with the intent of developing a more detailed product opportunity to pursue.
  2. Reduce agricultural emissions by making high-performance farm management, including effortless compliance reporting, straightforward, fun and effective for busy farmers. NZ's commitments to reduce agricultural emissions (our largest single sector) place increased compliance and reporting burdens on busy farmers who don't want to report the same data multiple times to different regulators and authorities. In tandem, rising business costs and constraints drive a need for continuous improvements in efficiency, performance and farm management processes in order to remain profitable. This in turn drives increases in complexity and the volume of data that farmers must work with. Many industry organisations and associated software developers offer existing products aimed at addressing aspects of these problems, but anecdotal feedback indicates these are poorly integrated, piecemeal solutions that are often frustrating to use - a burden rather than a source of continuous improvement. It looks like there could be an opportunity for a delightful, comprehensive farm management and reporting system to disrupt the industry and help farmers run more profitable and sustainable farms while also reducing compliance costs and effort.
  3. Lower sickness rates and improve cognitive performance by enabling every indoor space to benefit from continuous ventilation monitoring and reporting. Indoor air quality is important in reducing disease transmission risk and promoting optimal cognitive performance, but despite the current pandemic temporarily raising its profile, a focus on indoor air quality generally remains under the radar for most people. One factor contributing to this is the lack of widely available systems for continuously monitoring and reporting on air quality. I built https://co2mon.nz/ to help address this problem in my children's school during 2022. I see potential to further grow this business through marketing and raising awareness of the value of ventilation monitoring in all indoor environments. (A toy sketch of such a monitoring loop follows below.)
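As a rough illustration of what "continuous ventilation monitoring and reporting" can boil down to, here is a minimal Python sketch. It is not taken from co2mon.nz: read_co2_ppm() is a placeholder for whatever the actual sensor hardware provides, and the 800/1500 ppm thresholds are common rules of thumb rather than figures from this post.

```python
#!/usr/bin/env python3
"""Toy sketch of a continuous CO2 monitoring and reporting loop."""
import csv
import random
import time
from datetime import datetime, timezone


def read_co2_ppm() -> int:
    # Placeholder: a real deployment would query an NDIR CO2 sensor here.
    return random.randint(400, 2000)


def classify(ppm: int) -> str:
    # Rule-of-thumb thresholds, not values from the post.
    if ppm < 800:
        return "good"
    if ppm < 1500:
        return "ventilate soon"
    return "ventilate now"


def monitor(log_path: str = "co2-log.csv", interval_s: int = 60) -> None:
    # Append one timestamped reading per interval to a CSV "report".
    with open(log_path, "a", newline="") as fh:
        writer = csv.writer(fh)
        while True:
            ppm = read_co2_ppm()
            now = datetime.now(timezone.utc).isoformat()
            writer.writerow([now, ppm, classify(ppm)])
            fh.flush()
            time.sleep(interval_s)


if __name__ == "__main__":
    monitor()
```

A real product would replace the CSV file with a reporting backend and dashboards, but the read-classify-record loop is the core of it.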
In addition to these mission-aligned product ideas, I'm also interested in exploring the creation of small to medium sized SaaS applications that deliver useful value by serving the needs of a specialised or niche business or industry. Even when not directly linked to the overall mission, the development and operation of products of this type can support the strategy. Each application adds direct revenue and also contributes to achieving better economies of scale in the many backend processes and infrastructure required to deliver secure, reliable and performant software systems.

Developing my professional network To help make this goal more actionable and measurable I will track three sub-goals:
  1. To build a professional relationship with at least 30 new people this year, meaning that we've met and had a decent conversation at least a couple of times and keep in touch at least every few months in some form.
  2. To publish a piece of writing on this site at least once a week, and for many of those to generate interesting conversations and feedback. I'll use this as an opportunity to explore product ideas, highlight my consulting expertise and generally contribute interesting technical content into the world.
  3. To support the growth of my local technical community by volunteering my experience and knowledge with others through activities such as mentoring, conference talks and similar.

Next Steps Over the coming weeks I'll write more about each of these topics - you can use the box in the sidebar (or on the front page, if you're on a phone) to be notified when I post new writing (there's also an RSS feed here, for the geeks). I'd love to have your feedback and engagement on these goals too - please drop me an email with your thoughts or even book a meeting - it won't be a distraction to me, you'll be helping me meet my goal of developing and maintaining my network :)

16 January 2023

Russ Allbery: Review: The Truth

Review: The Truth, by Terry Pratchett
Series: Discworld #25
Publisher: Harper
Copyright: November 2000
Printing: August 2014
ISBN: 0-06-230736-3
Format: Mass market
Pages: 435
The Truth is the 25th Discworld novel. Some reading order guides group it loosely into an "industrial revolution" sequence following Moving Pictures, but while there are thematic similarities I'll talk about in a moment, there's no real plot continuity. You could arguably start reading Discworld here, although you'd be spoiled for some character developments in the early Watch novels. William de Worde is paid to write a newsletter. That's not precisely what he calls it, and it's not clear whether his patrons know that he publishes it that way. He's paid to report on news of Ankh-Morpork that may be of interest to various rich or influential people who are not in Ankh-Morpork, and he discovered the best way to optimize this was to write a template of the newsletter, bring it to an engraver to make a plate of it, and run off copies for each of his customers, with some minor hand-written customization. It's a comfortable living for the estranged younger son of a wealthy noble. As the story opens, William is dutifully recording the rumor that dwarfs have discovered how to turn lead into gold. The rumor is true, although not in the way that one might initially assume.
The world is made up of four elements: Earth, Air, Fire, and Water. This is a fact well known even to Corporal Nobbs. It's also wrong. There's a fifth element, and generally it's called Surprise. For example, the dwarfs found out how to turn lead into gold by doing it the hard way. The difference between that and the easy way is that the hard way works.
The dwarfs used the lead to make a movable type printing press, which is about to turn William de Worde's small-scale, hand-crafted newsletter into a newspaper. The movable type printing press is not unknown technology. It's banned technology, because the powers that be in Ankh-Morpork know enough to be deeply suspicious of it. The religious establishment doesn't like it because words are too important and powerful to automate. The nobles and the Watch don't like it because cheap words cause problems. And the engraver's guild doesn't like it for obvious reasons. However, Lord Vetinari knows that one cannot apply brakes to a volcano, and commerce with the dwarfs is very important to the city. The dwarfs can continue. At least for now. As in Moving Pictures, most of The Truth is an idiosyncratic speedrun of the social effects of a new technology, this time newspapers. William has no grand plan; he's just an observant man who likes to write, cares a lot about the truth, and accidentally stumbles into editing a newspaper. (This, plus being an estranged son of a rich family, feels very on-point for journalism.) His naive belief is that people want to read true things, since that's what his original patrons wanted. Truth, however, may not be in the top five things people want from a newspaper. This setup requires some narrative force to push it along, which is provided by a plot to depose Vetinari by framing him for murder. The most interesting part of that story is Mr. Pin and Mr. Tulip, the people hired to do the framing and then dispose of the evidence. They're a classic villain type: the brains and the brawn, dangerous, terrifying, and willing to do horrible things to people. But one thing Pratchett excels at is taking a standard character type, turning it a bit sideways, and stuffing in things that one wouldn't think would belong. In this case, that's Mr. Tulip's deep appreciation for, and genius grasp of, fine art. It should not work to have the looming, awful person with anger issues be able to identify the exact heritage of every sculpture and fine piece of goldsmithing, and yet somehow it does. Also as in Moving Pictures (and, in a different way, Soul Music), Pratchett tends to anthropomorphize technology, giving it a life and motivations of its own. In this case, that's William's growing perception of the press as an insatiable maw into which one has to feed words. I'm usually dubious of shifting agency from humans to things when doing social analysis (and there's a lot of social analysis here), but I have to concede that Pratchett captures something deeply true about the experience of feedback loops with an audience. A lot of what Pratchett puts into this book about the problematic relationship between a popular press and the truth is obvious and familiar, but he also makes some subtle points about the way the medium shapes what people expect from it and how people produce content for it that are worthy of Marshall McLuhan. The interactions between William and the Watch were less satisfying. In our world, the US press is, with only rare exceptions, a thoughtless PR organ for police propaganda and the exonerative tense. Pratchett tackles that here... sort of. William vaguely grasps that his job as a reporter may be contrary to the job of the Watch to maintain order, and Vimes's ambivalent feelings towards "solving crimes" push the story in that direction. 
But this is also Vimes, who is clearly established as one of the good sort and therefore is a bad vehicle for talking about how the police corrupt the press. Pratchett has Vimes and Vetinari tacitly encourage William, which works within the story but takes the pressure off the conflict and leaves William well short of understanding the underlying politics. There's a lot more that could be said about the tension between the press and the authorities, but I think the Discworld setup isn't suitable for it. This is the sort of book that benefits from twenty-four volumes of backstory and practice. Pratchett's Ankh-Morpork cast ticks along like a well-oiled machine, which frees up space that would otherwise have to be spent on establishing secondary characters. The result is a lot of plot and social analysis shoved into a standard-length Discworld novel, and a story that's hard to put down. The balance between humor and plot is just about perfect, the references and allusions aren't overwhelming, and the supporting characters, both new and old, are excellent. We even get a good Death sequence. This is solid, consistent stuff: Discworld as a mature, well-developed setting with plenty of stories left to tell. Followed by Thief of Time in publication order, and later by Monstrous Regiment in the vaguely-connected industrial revolution sequence. Rating: 8 out of 10

8 January 2023

Scarlett Gately Moore: Debian: Coming soon! MycroftAI! KDE snaps update.

I am excited to announce that I have joined the MycroftAI team in Salsa and am working hard to get this packaged up and released in Debian. You can track our progress here: https://salsa.debian.org/mycroftai-team Snaps are on temporary hold while we get everything switched over to core22. This includes the neon-extension, which requires merges and store requests to be honored. Hopefully folks are returning from holidays and things will start moving again. Thank you for your patience! I am still seeking work; if you or anyone you know is willing to give me a chance to shine, you won't regret it. Life has leveled out and I am ready to make great things happen! I admit the interviewing process is much more difficult than in years past; any advice here is also appreciated. Thank you for stopping by.

Russ Allbery: Review: Postwar

Review: Postwar, by Tony Judt
Publisher: Penguin Books
Copyright: 2005
ISBN: 1-4406-2476-3
Format: Kindle
Pages: 835
Tony Judt (1948–2010) was a British-American historian and Erich Maria Remarque Professor in European Studies at New York University. Postwar is his magnum opus, a history of Europe from 1945 to 2005. A book described as a history of Europe could be anything from a textbook to a political analysis, so the first useful question to ask is what sort of history. That's a somewhat difficult question to answer. Postwar mentions a great deal of conventional history, including important political movements and changes of government, but despite a stated topic that would suit a survey textbook, it doesn't provide that sort of list of facts and dates. Judt expects the reader to already be familiar with the broad outlines of modern European history. However, Postwar is also not a specialty history and avoids diving too deep into any one area. Trends in art, philosophy, and economics are all mentioned to set a broader context, but still only at the level of a general survey. My best description is that Postwar is a comprehensive social and political history that attempts to focus less on specific events and more on larger trends of thought. Judt grounds his narrative in concrete, factual events, but the emphasis is on how those living in Europe, at each point in history, thought of their society, their politics, and their place in both. Most of the space goes to exploring those nuances of thought and day-to-day life. In the US university context, I'd place this book as an intermediate-level course in modern European history, after the survey course that provides students with a basic framework but before graduate-level specializations in specific topics. If you have not had a solid basic education in European history (and my guess is that most people from the US have not), Judt will provide the necessary signposts, but you should expect to need to look up the signposts you don't recognize. I, as the dubious beneficiary of a US high school history education now many decades in the past, frequently resorted to Wikipedia for additional background. Postwar uses a simple chronological structure in four parts: the immediate post-war years and the beginning of the Cold War (1945–1953), the era of rapidly growing western European prosperity (1953–1971), the years of recession and increased turmoil leading up to the collapse of communism (1971–1989), and the aftermath of the collapse of communism and the rise of the European Union (1989–2005). Each part is divided into four to eight long chapters that trace a particular theme. Judt usually starts with the overview of a theme and then follows the local manifestations of it on a spiral through European countries in whatever order seems appropriate. For the bulk of the book that covers the era of the Cold War, when experiences were drastically different inside or outside the Soviet bloc, he usually separates western and eastern Europe into alternating chapters. Reviewing this sort of book is tricky because so much will depend on how well you already know the topic. My interest in history is strictly amateur and I tend to avoid modern history (usually I find it too depressing), so for me this book was remedial, filling in large knowledge gaps that I ideally shouldn't have had. Postwar was a runner-up for the Pulitzer Prize for General Non-Fiction, so I think I'm safe saying you won't go far wrong reading it, but here's the necessary disclaimer that the rest of my reactions may not be useful if you're better versed in modern European history than I was.
(This would not be difficult.) That said, I found Postwar invaluable because of its big-picture focus. The events and dates are easy enough to find on the Internet; what was missing for me in understanding Europe was the intent and social structures created by and causing those events. For example, from early in the book:
On one thing, however, all were agreed – resisters and politicians alike: "planning". The disasters of the inter-war decades – the missed opportunities after 1918, the great depression that followed the stock-market crash of 1929, the waste of unemployment, the inequalities, injustices and inefficiencies of laissez-faire capitalism that had led so many into authoritarian temptation, the brazen indifference of an arrogant ruling elite and the incompetence of an inadequate political class – all seemed to be connected by the utter failure to organize society better. If democracy was to work, if it was to recover its appeal, it would have to be planned.
It's one thing to be familiar with the basic economic and political arguments between degrees of free market and planned economies. It's quite another to understand how the appeal of one approach or the discredit of another stems from recent historical experience, and that's what a good history can provide. Judt does not hesitate to draw these sorts of conclusions, and I'm sure some of them are controversial. But while he's opinionated, he's rarely ideological, and he offers no grand explanations. His discussion of the Yugoslav Wars stands out as an example: he mentions various theories of blame (a fraught local ethnic history, the decision by others to not intervene until the situation was truly dire), but largely discards them. Judt's offered explanation is that local politicians saw an opportunity to gain power by inflaming ethnic animosity, and a large portion of the population participated in this process, either passively or eagerly. Other explanations are both unnecessarily complex and too willing to deprive Yugoslavs of agency. I found this refreshingly blunt. When is more complex analysis a way to diffuse responsibility and cling to an ideological fantasy that the right foreign policy would have resolved a problem? A few personal grumblings do creep in, particularly in the chapters on the 1970s (and I think it's not a coincidence that this matches Judt's own young adulthood, a time when one is prone to forming a lot of opinions). There is a brief but stinging criticism of postmodernism in scholarship, which I thought was justified but probably incomplete, and a decidedly grumpy dismissal of punk music, which I thought was less fair. But these are brief asides that don't detract from the overall work. Indeed, they, along with the occasional wry asides ("respecting long-established European practice, no one asked the Poles for their views [on Poland's new frontiers]") add a lot of character. Insofar as this book has a thesis, it's in the implications of the title: Europe only exited the postwar period at the end of the 20th century. Political stability through exhaustion, the overwhelming urgency of economic recovery, and the degree to which the Iron Curtain and the Cold War froze eastern Europe in amber meant that full European recovery from World War II was drawn out and at times suspended. It's only after 1989 and its subsequent upheavals that European politics were able to move beyond postwar concerns. Some of that movement was a reemergence of earlier European politics of nations and ethnic conflict. But, new on the scene, was a sense of identity as Europeans, one that western Europe circled warily and eastern Europe saw as the only realistic path forward.
What binds Europeans together, even when they are deeply critical of some aspect or other of its practical workings, is what it has become conventional to call – in disjunctive but revealing contrast with "the American way of life" – the "European model of society".
Judt also gave me a new appreciation of how traumatic people find the assignment of fault, and how difficult it is to wrestle with guilt without providing open invitations to political backlash. People will go to great lengths to not feel guilty, and pressing the point runs a substantial risk of creating popular support for ideological movements that are willing to lie to their followers. The book's most memorable treatment of this observation is in the epilogue, which traces popular European attitudes towards the history of the Holocaust through the whole time period. The largest problem with this book is that it is dense and very long. I'm a fairly fast reader, but this was the only book I read through most of my holiday vacation and it still took a full week into the new year to finish it. By the end, I admit I was somewhat exhausted and ready to be finished with European history for a while (although the epilogue is very much worth waiting for). If you, unlike me, can read a book slowly among other things, that may be a good tactic. But despite feeling like this was a slog at times, I'm very glad that I read it. I'm not sure if someone with a firmer grounding in European history would have gotten as much out of it, but I, at least, needed something this comprehensive to wrap my mind around the timeline and fill in some embarrassing gaps. Judt is not the most entertaining writer (although he has his moments), and this is not the sort of popular history that goes out of its way to draw you in, but I found it approachable and clear. If you're looking for a solid survey of modern European history with this type of high-level focus, recommended. Rating: 8 out of 10

29 December 2022

Chris Lamb: Favourite books of 2022: Memoir/biography

In my two most recent posts, I listed the fiction and classic fiction I enjoyed the most in 2022. I'll leave my roundup of general non-fiction until tomorrow, but today I'll be going over my favourite memoirs and biographies, in no particular order. Books that just missed the cut here include Roisin Kiberd's The Disconnect: A Personal Journey Through the Internet (2019), Steve Richards' The Prime Ministers (2019), which reflects on UK leadership from Harold Wilson to Boris Johnson, Robert Graves' Great War memoir Goodbye to All That (1929) and David Mikics's portrait of Stanley Kubrick called American Filmmaker.

Afropean: Notes from Black Europe (2019) Johny Pitts Johny Pitts is a photographer and writer who lives in the north of England and who set out to explore "black Europe from the street up" – those districts within European cities that, although once 'white spaces', are now occupied by Black people. Unhappy with the framing of the Black experience back home in post-industrial Sheffield, Pitts decided to become a nomad and go abroad to seek out the sense of belonging he cannot find in post-Brexit Britain, and Afropean details his journey through Paris, Brussels, Lisbon, Berlin, Stockholm and Moscow. However, Pitts isn't just avoiding the polarisation and structural racism embedded in contemporary British life. Rather, he is seeking a kind of super-national community that transcends the reductive and limiting nationalisms of all European countries, most of which have based their national story on a self-serving mix of nostalgia and postcolonial fairy tales. Indeed, the term 'Afropean' is the key to understanding the goal of this captivating memoir. Pitts writes at the beginning of this book that the word wasn't coined merely as a response to the crude nativisms of Nigel Farage and Marine Le Pen, but that it:
encouraged me to think of myself as whole and unhyphenated. […] Here was a space where blackness was taking part in shaping European identity at large. It suggested the possibility of living in and with more than one idea: Africa and Europe, or, by extension, the Global South and the West, without being mixed-this, half-that or black-other. That being black in Europe didn't necessarily mean being an immigrant.
In search of this whole new theory of home, Pitts travels to the infamous banlieue of Clichy-sous-Bois just to the east of Paris, thence to Matonge in Brussels, as well as making a quick and abortive trip into Moscow and other parallel communities throughout the continent. In these disparate environs, Pitts strikes up countless conversations with regular folk in order to hear their quotidian stories of living, and ultimately to move away from the idea that Black history is defined exclusively by slavery. Indeed, to Pitts, the idea of race is one that ultimately restricts one's humanity; the concept "is often forced to embody and speak for certain ideas, despite the fact it can't ever hold in both hands the full spectrum of a human life and the cultural nuances it creates." It's difficult to do justice to the effectiveness of the conversations Pitts has throughout his travels, but his shrewd attention to demeanour, language, raiment and expression vividly brings alive the people he talks to. Of related interest to fellow Brits as well are the many astute observations and comparisons with Black and working-class British life. The tone shifts quite often throughout this book. There might be an amusing aside one minute, such as the portrait of an African American tourist in Paris to whom "the whole city was a film set, with even its homeless people appearing to him as something oddly picturesque." But the register abruptly changes when he visits Clichy-sous-Bois on an anniversary important to the area, and an element of genuine danger is introduced when Johny briefly visits Moscow and barely gets out alive. What's especially remarkable about this book is that there is a freshness to Pitts' treatment of many well-worn subjects. This can be seen in his account of Belgium under the reign of Leopold II, the history of Portuguese colonialism (actually mostly unknown to me), as well as in the way Pitts' own attitude to contemporary anti-fascist movements changes throughout an Antifa march. This chapter was an especial delight, and not only because it underlined just how much of Johny's trip was an inner journey of an author willing to have his mind changed. Although Johny travels alone throughout his journey, in the second half of the book Pitts becomes increasingly accompanied by a number of Black intellectuals, through the selective citing of Frantz Fanon, James Baldwin and Caryl Phillips. (Nevertheless, Johny has also brought his camera for the journey as well, adding a personal touch to this already highly-intimate book.) I suspect that his increasing use of Black intellectual writing in the latter half of the book may be because Pitts' hopes about an 'Afropean' existence ever becoming a reality are continually dashed and undercut. The unity among potential Afropeans appears more and more unrealisable as the narrative unfolds, the various reasons for which Johny explores both prosaically and poetically. Indeed, by the end of the book, it's unclear whether Johny has managed to find what he left the shores of England to find. But his mix of history, sociology and observation of other cultures right on my doorstep was something of a revelation to me.

Orwell's Roses (2021) Rebecca Solnit Orwell's Roses is an alternative journey through the life and afterlife of George Orwell, reimagining his life primarily through the lens of his attentiveness to nature. Yet this framing of the book as an 'alternative' history is only revisionist if we compare it to the usual view of Orwell as a bastion of 'free speech' and English 'common sense': the roses of the title of this book were very much planted by Orwell in his Hertfordshire garden in 1936, and his yearning for nature was one of the many constants throughout his life. Indeed, Orwell wrote about wildlife and outdoor life whenever he could get away with it, taking pleasure in a blackbird's song and waxing nostalgically about the English countryside in his 1939 novel Coming Up for Air (reviewed yesterday).
By sheer chance, I actually visited this exact garden immediately prior to the publication of this book.
Solnit has a particular ability to evince unexpected connections between Orwell and the things he was writing about: Joseph Stalin's obsession with forcing lemons to grow in ludicrously cold climates; Orwell's slave-owning ancestors in Jamaica; Jamaica Kincaid's critique of colonialism in the flower garden; and the exploitative rose industry in Colombia that supplies the American market. Solnit introduces all of these new correspondences in a voice that feels like a breath of fresh air after decades of stodgy Orwellania, and without lapsing into a kind of verbal soft-focus. Indeed, the book displays a marked indifference towards the usual (male-centric) Orwell fandom. Her book draws to a close with a rereading of the 'dystopian' Nineteen Eighty-Four that completes her touching portrait of a more optimistic and hopeful Orwell, as well as a reflection on beauty and a manifesto for experiencing joy as an act of resistance.

The Disaster Artist (2013) Greg Sestero & Tom Bissell For those not already in the know, The Room was a 2003 film by director-producer-writer-actor Tommy Wiseau, an inscrutable Polish immigré with an impenetrable background, an idiosyncratic choice of wardrobe and a mysteriously large source of income. The film, which centres on a melodramatic love triangle, has since been described by several commentators and publications as one of the worst films ever made. Tommy's production completely bombed at the so-called 'box office' (the release was actually funded entirely by Wiseau personally), but the film slowly became a favourite at cult cinema screenings. Given Tommy's prominent and central role in the film, there was always an inherent cruelty involved in indulging in the spectacle of The Room: the audience was laughing because the film was astonishingly bad, of course, but Wiseau infused his film with a sincere earnestness that, in a heartless twist of irony, may be precisely why it is so terrible to begin with. Indeed, it should be stressed that The Room is not simply a 'bad' film, and therefore not worth paying any attention to: it is uncannily bad in a way that makes it eerily compelling to watch. It unintentionally subverts all the rules of filmmaking in a way that captivates the attention. Take this representative example:
This thirty-six-second scene showcases almost every problem in The Room: the acting, the lighting, the sound design, the pacing, the dialogue and the fact that this unnecessary scene (which does not advance the plot) even exists in the first place. One problem that the above clip doesn't capture, however, is Tommy's vulnerable ego. (He would later make the potentially conflicting claims that The Room was both an ironic cult success and that he is okay with people interpreting it sincerely.) Indeed, the filmmaker's central role as Johnny (along with his Willy-Wonka-meets-Dracula persona) doesn't strike viewers as yet another vanity project; it actually asks more questions than it answers. Why did Tommy even make this film? What is driving him psychologically? And why, and how, is he so spellbinding? On the surface, then, 2013's The Disaster Artist is a book about the making of one of the strangest films ever made, written by The Room's co-star Greg Sestero and journalist Tom Bissell. Naturally, you learn some jaw-dropping facts about the production and inspiration of the film, the seed of which was planted when Greg and Tommy went to see an early screening of The Talented Mr Ripley (1999). It turns out that Greg's character in The Room is based on Tommy's idiosyncratic misinterpretation of its plot, extending even to the character's name, Mark, which, in textbook Tommy style, was taken directly (or at least Tommy believed) from one of Ripley's movie stars: "Mark Damon" [sic]. Almost as absorbing as The Room itself, The Disaster Artist is partly a memoir about Thomas P. Wiseau and his cinematic masterpiece. But it could also be described as a biography about a dysfunctional male relationship and, almost certainly entirely unconsciously, a text about the limitations of heteronormativity. It is this latter element that struck me the most whilst reading this book: if you take a step back for a moment, there is something uniquely sad about Tommy's inability to connect with others, and then, when Wiseau poured his soul into his film, people just laughed. Despite the stories about his atrocious behaviour both on and off the film set, there's something deeply tragic about the whole affair. Jean-Luc Godard, who passed away earlier this year, once observed that every fictional film is a documentary of its actors. The Disaster Artist shows that this well-worn aphorism doesn't begin to cover it.

20 December 2022

Freexian Collaborators: Recent improvements to Tryton's Debian Packaging (by Mathias Behrle and Raphaël Hertzog)

Foreword Freexian has been using Tryton for a few years to handle its invoicing and accounting. We have thus also been using the Debian packages maintained by Mathias Behrle, and we have been funding some of his work because maintaining an ERP with more than 50 source packages was too much for him to handle alone in his free time. When Mathias discovered our Project Funding initiative, it was quite natural for him to consider applying to be able to bring some much-needed improvements to Tryton's Debian packaging. He's running his own consulting company (MBSolutions) so it's easy for him to invoice Freexian to get the money for the funded projects. What follows is Mathias Behrle's description of the projects that he submitted and of the work that he achieved. If you want to contact him, you can reach out to mathiasb@m9s.biz or mbehrle@debian.org. You can also follow him on Mastodon.

Report In January 2022 I applied for two projects in the Freexian Project Funding Initiative.
  • Tryton Project 1
    • The starting point of this project was Debian Bug #998319: tryton-server should provide a ready-to-use production-grade server config. To address this problem, instead of only providing configuration snippets, the idea was to provide a full-featured guided setup of a Tryton production environment, thus eliminating the risks of trial and error for the system administrator.
  • Tryton Project 2
    • The goal of this project was to complete the available Tryton modules in Debian main with the latest set available from tryton.org and to automate the task of creating new Debian packages from Tryton modules as much as possible.

Accomplishments As the result of Task 1, several new packages emerged:
  • tryton-server-postgresql provides the guided setup of a PostgreSQL database backend.
  • tryton-server-uwsgi provides the installation and configuration of a robust WSGI server on top of tryton-server.
  • tryton-server-nginx provides the configuration of a scalable web frontend to the uwsgi server, including the optional setup of secure access via Let's Encrypt certificates.
  • tryton-server-all-in-one puts it all together to provide a fully functional Tryton production environment, including a database filled with basic static data. With the installation of this package a robust and secure production-grade setup is possible from scratch; all the configuration legwork is done in the background, as shown below.
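In practice, the guided setup boils down to installing the metapackage and answering the debconf questions; a minimal sketch, assuming the packages above are available in your release:
# pulls in tryton-server plus the PostgreSQL, uwsgi and nginx glue packages;
# the configuration questions are then asked interactively via debconf
apt install tryton-server-all-in-one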
The work was thoroughly reviewed by Neil Williams. Thanks go to him for his detailed feedback, providing very valuable information from the point of view of a fresh Tryton user coming into first contact with the software. A cordial thank you as well goes to the translation teams providing initial reviews and translations for the configuration questions. The efforts of Task 1 were completed with Task 2:
  • A Tryton specific version of PyPi2deb was created to help in the preparation of new Debian packages for new Tryton modules.
  • All missing Tryton modules for the current series were packaged for Debian.
On top of those two planned projects, I completed an additional task: the packaging of the Tryton Web Client. The Web Client is quite an important feature for accessing a Tryton server from a browser, and even a crucial requirement for some companies. Unfortunately the packaging of the Web Client for Debian was problematic from the beginning. tryton-sao requires exact versions of JavaScript libraries that are almost never guaranteed to be available in the different targeted Debian releases. Therefore a package with vendored libraries has been created and will hopefully soon hit the Debian main archive. The package is already available from the Tryton Backport Mirror for the usually supported Debian releases.

Summary I am very pleased that the Tryton suite in Debian has gained full coverage of Tryton modules and a user-friendly installation. The completion of the project represents a huge step forward in the state-of-the-art deployment of a production-grade Tryton environment. Without the monetary support of Freexian's project funding the realization of this project wouldn't have been possible in this way and to this extent.

30 November 2022

Bits from Debian: New Debian Developers and Maintainers (September and October 2022)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

Russell Coker: Links November 2022

Here's the US Senate Statement of Frances Haugen who used to work for Facebook countering misinformation and espionage [1]. She believes that Facebook is capable of dealing with the online radicalisation and promotion of bad things on its platform but is unwilling to do so for financial reasons. We need strong regulation of Facebook and it probably needs to be broken up. Interesting article from The Atlantic about filtered cigarettes being more unhealthy than unfiltered [2]. Every time I think I know how evil tobacco companies are I get surprised by some new evidence. Cory Doctorow wrote an insightful article about resistance to rubber hose cryptanalysis [3]. Cory Doctorow wrote an interesting article When Automation Becomes Enforcement with a new way of thinking about Snapchat etc [4]. Cory Doctorow wrote an insightful and informative article Big Tech Isn't Stealing News Publishers' Content, It's Stealing Their Money [5] which should be read by politicians from all countries that are trying to restrict quoting news on the Internet. Interesting article on Santiago Genoves, who could be considered a pioneer of reality TV for deliberately creating disputes between a group of young men and women on a raft in the Atlantic for 3 months [6]. Matthew Garrett wrote an interesting review of the Freedom Phone, seems that it's not good for privacy and linked to some companies doing weird stuff [7]. Definitely worth reading. Cory Doctorow wrote an interesting and amusing article about backdoors for machine learning [8]. Petter Reinholdtsen wrote an informative post on how to make a bootable USB stick image from an ISO file [9]. Apparently Lenovo provides ISO images to update laptops that don't have DVD drives. :( Barry Gander wrote an interesting article about the fall of Rome and the decline of the US [10]. It's a great concern that the US might fail in the same way as Rome. Ethan Siegel wrote an interesting article about Iapetus, a moon of Saturn that is one of the strangest objects in the solar system [11]. Cory Doctorow's article Revenge of the Chickenized Reverse-Centaurs has some good insights into the horrible things that companies like Amazon are doing to their employees and how we can correct that [12]. Charles Stross wrote an insightful blog post about Billionaires [13]. They can't do much for themselves with the extra money beyond about $10m or $100m (EG Steve Jobs was unable to extend his own life much when he had cancer) and their money is trivial when compared to the global economy. They are however effective parasites capable of doing great damage to the country that hosts them. Cory Doctorow has an interesting article about how John Deere is being evil again [14]. This time with potentially catastrophic results.

24 November 2022

Russ Allbery: Review: Servant Mage

Review: Servant Mage, by Kate Elliott
Publisher: Tordotcom
Copyright: 2022
ISBN: 1-250-76904-3
Format: Kindle
Pages: 165
Servant Mage is a (so far at least) standalone fantasy novella. Fellian is a servant mage, which means that she makes Lamps for the customers of the inn, cleans the privies, and watches her effective owners find ways to put her deeper into indentured servitude. That's the only life permitted by the August Protector to those found to carry the taint of magical talent, caused by (it is said) demons who have bound themselves to their souls. Fellian's effective resistance is limited to giving covert reading lessons. Or was, before she is hired by a handsome man who needs a Lamplighter. A man who has been looking for her specifically, is a magical Adept, and looks suspiciously like a soldier for the despised and overthrown monarchists. A difficulty with writing a story that reverses cliches is that you have to establish the cliches in order to reverse them, which runs the risk that a lot of the story may feel cliched. I fear some of that happened here. Magic, in this world, is divided into elemental spheres, each of which has been restricted to limited and practical tasks by the Liberationists. The new regime searches the population for the mage-gifted and forces them into public service for the good of all (or at least that's how they describe it), with as little education as possible. Fellian was taught to light Lamps, but what she has is fire magic, and she's worked out some additional techniques on her own. The Adept is air, and one of the soldiers with him is earth. If you're guessing that two more elements turn up shortly and something important happens if you get all five of them together, you're perhaps sensing a bit of unoriginality in the magic system. That's not the cliche that's the primary target of this story, though. The current rulers of this country, led by the austere August Protector, are dictatorial anti-monarchists who are happy to force mages into indenture and deny people schooling. Fellian has indeed fallen in with the monarchists, who unsurprisingly are attempting to reverse the revolution. They, unlike the Liberationists, respect mages and are willing to train them, and they would like to recruit Fellian. I won't spoil the details of where Elliott is going with the plot, but it does eventually go somewhere refreshingly different. The path to get there, though, is familiar from any number of fantasy epics that start with a slave with special powers. Servant Mage is more aware of this than most, and Fellian is sharp-tongued and suspicious rather than innocent and trainable, but there are a lot of familiar character tropes and generic fantasy politics. This is the second story (along with the Spiritwalker trilogy) of Elliott's I've read that uses the French Revolution as a political model but fails to provide satisfying political depth. This one is a novella and can therefore be forgiven for not having the time to dive into revolutionary politics, but I wish Elliott would do more with this idea. Here, the anti-monarchists are straight-up villains, and while that's partly setup for more nuance than you might be expecting, it still felt like a waste of the setting. I want the book that tackles the hard social problem of reconciling the chaos and hopefulness of political revolution with magical powers that can be dangerous and oppressive without the structure of tradition. It feels like Elliott keeps edging towards that book but hasn't found the right hook to write it. 
Instead, we get a solid main character in Fellian, a bunch of supporting characters who mostly register as "okay," some magical set pieces that have all the right elements and yet didn't grab my sense of wonder, and a story that seemed heavily signposted. The conclusion was the best part of the story, but by the time we got there it wasn't as much of a surprise as I wanted it to be. I had this feeling with the Spiritwalker series as well: the pieces making up the story are of good quality, and Elliott's writing is solid, but the narrative magic never quite coheres for me. It's the sort of novella where I finished reading, went "huh," and then was excited to start a new book. I have no idea if there are plans for more stories in this universe, but Servant Mage felt like a prelude to a longer series. If that series does materialize, there are some signs that it could be more satisfying. At the end of the story, Fellian is finally in a place to do something original and different, and I am mildly curious what she might do. Maybe enough to read the next book, if one turns up. Mostly, though, I'm waiting for the sequel to Unconquerable Sun. Next April! Rating: 6 out of 10

16 November 2022

Antoine Beaupré: A ZFS migration

In my tubman setup, I started using ZFS on an old server I had lying around. The machine is really old though (2011!) and it "feels" pretty slow. I want to see how much of that is ZFS and how much is the machine. Synthetic benchmarks show that ZFS may be slower than mdadm in RAID-10 or RAID-6 configuration, so I want to confirm that on a live workload: my workstation. Plus, I want easy, regular, high performance backups (with send/receive snapshots) and there's no way I'm going to use BTRFS because I find it too confusing and unreliable. So off we go.

Installation Since this is a conversion (and not a new install), our procedure is slightly different than the official documentation but otherwise it's pretty much in the same spirit: we're going to use ZFS for everything, including the root filesystem. So, install the required packages, on the current system:
apt install --yes gdisk zfs-dkms zfs zfs-initramfs zfsutils-linux
We also tell DKMS that we need to rebuild the initrd when upgrading:
echo REMAKE_INITRD=yes > /etc/dkms/zfs.conf

Partitioning This is going to partition /dev/sdc with:
  • 1MB MBR / BIOS legacy boot
  • 512MB EFI boot
  • 1GB bpool, unencrypted pool for /boot
  • rest of the disk for zpool, the rest of the data
     sgdisk --zap-all /dev/sdc
     sgdisk -a1 -n1:24K:+1000K -t1:EF02 /dev/sdc
     sgdisk     -n2:1M:+512M   -t2:EF00 /dev/sdc
     sgdisk     -n3:0:+1G      -t3:BF01 /dev/sdc
     sgdisk     -n4:0:0        -t4:BF00 /dev/sdc
    
That will look something like this:
    root@curie:/home/anarcat# sgdisk -p /dev/sdc
    Disk /dev/sdc: 1953525168 sectors, 931.5 GiB
    Model: ESD-S1C         
    Sector size (logical/physical): 512/512 bytes
    Disk identifier (GUID): [REDACTED]
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 1953525134
    Partitions will be aligned on 16-sector boundaries
    Total free space is 14 sectors (7.0 KiB)
    Number  Start (sector)    End (sector)  Size       Code  Name
       1              48            2047   1000.0 KiB  EF02  
       2            2048         1050623   512.0 MiB   EF00  
       3         1050624         3147775   1024.0 MiB  BF01  
       4         3147776      1953525134   930.0 GiB   BF00
Unfortunately, we can't be sure of the sector size here, because the USB controller is probably lying to us about it. Normally, this smartctl command should tell us the sector size as well:
root@curie:~# smartctl -i /dev/sdb -qnoserial
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.0-14-amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Black Mobile
Device Model:     WDC WD10JPLX-00MBPT0
Firmware Version: 01.01H01
User Capacity:    1 000 204 886 016 bytes [1,00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 6
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Tue May 17 13:33:04 2022 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Above is the example of the built-in HDD. But the SSD enclosed in that USB controller doesn't support SMART commands, so we can't trust that it really has 512-byte sectors. This matters because we need to pick the ashift value correctly. We're going to go ahead and assume the SSD drive has the common 4KB sector size, which means ashift=12. Note here that we are not creating a separate partition for swap. Swap on ZFS volumes (AKA "swap on ZVOL") can trigger lockups and that issue is still not fixed upstream. Ubuntu recommends using a separate partition for swap instead. But since this is "just" a workstation, we're betting that we will not suffer from this problem, after hearing a report from another Debian developer running this setup on their workstation successfully. We do not recommend this setup though. In fact, if I were to redo this partition scheme, I would probably use LUKS encryption and set up a dedicated swap partition, as I had problems with ZFS encryption as well.
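Since we can't fully trust the reported sector size, the ashift guess can at least be double-checked once the pools are created below. A minimal sketch, not part of the original procedure:
# the ashift pool property shows what was requested at creation time
zpool get ashift bpool rpool
# zdb can show the ashift actually used on the vdev (reads the pool config)
zdb -C rpool | grep ashift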

Creating pools ZFS pools are somewhat like "volume groups" if you are familiar with LVM, except they obviously also do things like RAID-10. (Even though LVM can technically also do RAID, people typically use mdadm instead.) In any case, the guide suggests creating two different pools here: one, in cleartext, for boot, and a separate, encrypted one, for the rest. Technically, the boot partition is required because the Grub bootloader only supports readonly ZFS pools, from what I understand. But I'm a little out of my depth here and just following the guide.

Boot pool creation This creates the boot pool in readonly mode with features that grub supports:
    zpool create \
        -o cachefile=/etc/zfs/zpool.cache \
        -o ashift=12 -d \
        -o feature@async_destroy=enabled \
        -o feature@bookmarks=enabled \
        -o feature@embedded_data=enabled \
        -o feature@empty_bpobj=enabled \
        -o feature@enabled_txg=enabled \
        -o feature@extensible_dataset=enabled \
        -o feature@filesystem_limits=enabled \
        -o feature@hole_birth=enabled \
        -o feature@large_blocks=enabled \
        -o feature@lz4_compress=enabled \
        -o feature@spacemap_histogram=enabled \
        -o feature@zpool_checkpoint=enabled \
        -O acltype=posixacl -O canmount=off \
        -O compression=lz4 \
        -O devices=off -O normalization=formD -O relatime=on -O xattr=sa \
        -O mountpoint=/boot -R /mnt \
        bpool /dev/sdc3
I haven't investigated all those settings and just trust the upstream guide on the above.

Main pool creation This is a more typical pool creation.
    zpool create \
        -o ashift=12 \
        -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
        -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
        -O compression=zstd \
        -O relatime=on \
        -O canmount=off \
        -O mountpoint=/ -R /mnt \
        rpool /dev/sdc4
Breaking this down:
  • -o ashift=12: mentioned above, 4k sector size
  • -O encryption=on -O keylocation=prompt -O keyformat=passphrase: encryption, prompt for a password, default algorithm is aes-256-gcm, explicit in the guide, made implicit here
  • -O acltype=posixacl -O xattr=sa: enable ACLs, with better performance (not enabled by default)
  • -O dnodesize=auto: related to extended attributes, less compatibility with other implementations
  • -O compression=zstd: enable zstd compression, can be disabled/enabled per dataset with zfs set compression=off rpool/example
  • -O relatime=on: classic atime optimisation, another that could be used on a busy server is atime=off
  • -O canmount=off: do not make the pool mount automatically with mount -a?
  • -O mountpoint=/ -R /mnt: mount pool on / in the future, but /mnt for now
Those settings are all available in zfsprops(8). Other flags are defined in zpool-create(8). The reasoning behind them is also explained in the upstream guide and some also in [the Debian wiki][]. Those flags were actually not used:
  • -O normalization=formD: normalize file names on comparisons (not storage); implies utf8only=on, which is a bad idea (and effectively meant my first sync failed to copy some files, including this folder from a supysonic checkout). And this cannot be changed after the filesystem is created. Bad, bad, bad.
[the Debian wiki]: https://wiki.debian.org/ZFS#Advanced_Topics

Side note about single-disk pools Also note that we're living dangerously here: single-disk ZFS pools are rumoured to be more dangerous than not running ZFS at all. The choice quote from this article is:
[...] any error can be detected, but cannot be corrected. This sounds like an acceptable compromise, but its actually not. The reason its not is that ZFS' metadata cannot be allowed to be corrupted. If it is it is likely the zpool will be impossible to mount (and will probably crash the system once the corruption is found). So a couple of bad sectors in the right place will mean that all data on the zpool will be lost. Not some, all. Also there's no ZFS recovery tools, so you cannot recover any data on the drives.
Compared with (say) ext4, where a single disk error can often be recovered from, this is pretty bad. But we are ready to live with this, with the idea that we'll have hourly offline snapshots that we can easily recover from (see the sketch below). It's a trade-off. Also, we're running this on an NVMe/M.2 drive which typically just blinks out of existence completely, and doesn't "bit rot" the way a HDD would. Also, the FreeBSD handbook quick start doesn't have any warnings about their first example, which is with a single disk. So I am reassured at least.
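For what it's worth, the "easily recover" part relies on plain ZFS snapshots. A minimal sketch (the snapshot and file names are made up, and my actual setup uses dedicated tooling for this):
# take a recursive snapshot of the whole pool
zfs snapshot -r rpool@2022-05-30-hourly
# list what snapshots exist
zfs list -t snapshot
# restore a single file from the hidden .zfs directory of the dataset...
cp /home/.zfs/snapshot/2022-05-30-hourly/anarcat/important.txt /home/anarcat/
# ... or roll a whole dataset back (only works against the latest snapshot)
zfs rollback rpool/home@2022-05-30-hourly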

Creating mount points Next we create the actual filesystems, known as "datasets", which are the things that get mounted on a mountpoint and hold the actual files.
  • this creates two containers, for ROOT and BOOT
     zfs create -o canmount=off -o mountpoint=none rpool/ROOT &&
     zfs create -o canmount=off -o mountpoint=none bpool/BOOT
    
    Note that it's unclear to me why those datasets are necessary, but they seem to be common practice, also used in this FreeBSD example. The OpenZFS guide mentions the Solaris upgrades and Ubuntu's zsys that use that container for upgrades and rollbacks. This blog post seems to explain a bit of the layout behind the installer.
  • this creates the actual boot and root filesystems:
     zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian &&
     zfs mount rpool/ROOT/debian &&
     zfs create -o mountpoint=/boot bpool/BOOT/debian
    
    I guess the debian name here is because we could technically have multiple operating systems with the same underlying datasets.
  • then the main datasets:
     zfs create                                 rpool/home &&
     zfs create -o mountpoint=/root             rpool/home/root &&
     chmod 700 /mnt/root &&
     zfs create                                 rpool/var
    
  • exclude temporary files from snapshots:
     zfs create -o com.sun:auto-snapshot=false  rpool/var/cache &&
     zfs create -o com.sun:auto-snapshot=false  rpool/var/tmp &&
     chmod 1777 /mnt/var/tmp
    
  • and skip automatic snapshots in Docker:
     zfs create -o canmount=off                 rpool/var/lib &&
     zfs create -o com.sun:auto-snapshot=false  rpool/var/lib/docker
    
    Notice here a peculiarity: we must create rpool/var/lib to create rpool/var/lib/docker otherwise we get this error:
     cannot create 'rpool/var/lib/docker': parent does not exist
    
    ... and no, just creating /mnt/var/lib doesn't fix that problem. In fact, it makes things even more confusing because an existing directory shadows a mountpoint, which is the opposite of how things normally work. Also note that you will probably need to change the storage driver in Docker; see the zfs-driver documentation for details but, basically, I did:
    echo '{ "storage-driver": "zfs" }' > /etc/docker/daemon.json
    
    Note that podman has the same problem (and similar solution):
    printf '[storage]\ndriver = "zfs"\n' > /etc/containers/storage.conf
    
  • make a tmpfs for /run:
     mkdir /mnt/run &&
     mount -t tmpfs tmpfs /mnt/run &&
     mkdir /mnt/run/lock
    
We don't create a /srv, as that's the HDD stuff. Also mount the EFI partition:
mkfs.fat -F 32 /dev/sdc2 &&
mount /dev/sdc2 /mnt/boot/efi/
At this point, everything should be mounted in /mnt. It should look like this:
root@curie:~# LANG=C df -h -t zfs -t vfat
Filesystem            Size  Used Avail Use% Mounted on
rpool/ROOT/debian     899G  384K  899G   1% /mnt
bpool/BOOT/debian     832M  123M  709M  15% /mnt/boot
rpool/home            899G  256K  899G   1% /mnt/home
rpool/home/root       899G  256K  899G   1% /mnt/root
rpool/var             899G  384K  899G   1% /mnt/var
rpool/var/cache       899G  256K  899G   1% /mnt/var/cache
rpool/var/tmp         899G  256K  899G   1% /mnt/var/tmp
rpool/var/lib/docker  899G  256K  899G   1% /mnt/var/lib/docker
/dev/sdc2             511M  4.0K  511M   1% /mnt/boot/efi
Now that we have everything setup and mounted, let's copy all files over.

Copying files This is the list of all the mounted filesystems to copy:
for fs in /boot/ /boot/efi/ / /home/; do
    echo "syncing $fs to /mnt$fs..." && 
    rsync -aSHAXx --info=progress2 --delete $fs /mnt$fs
done
You can check that the list is correct with:
mount -l -t ext4,btrfs,vfat | awk '{ print $3 }'
Note that we skip /srv as it's on a different disk. On the first run, we had:
root@curie:~# for fs in /boot/ /boot/efi/ / /home/; do
        echo "syncing $fs to /mnt$fs..." && 
        rsync -aSHAXx --info=progress2 $fs /mnt$fs
    done
syncing /boot/ to /mnt/boot/...
              0   0%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/299)  
syncing /boot/efi/ to /mnt/boot/efi/...
     16,831,437 100%  184.14MB/s    0:00:00 (xfr#101, to-chk=0/110)
syncing / to /mnt/...
 28,019,293,280  94%   47.63MB/s    0:09:21 (xfr#703710, ir-chk=6748/839220)rsync: [generator] delete_file: rmdir(var/lib/docker) failed: Device or resource busy (16)
could not make way for new symlink: var/lib/docker
 34,081,267,990  98%   50.71MB/s    0:10:40 (xfr#736577, to-chk=0/867732)    
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
syncing /home/ to /mnt/home/...
rsync: [sender] readlink_stat("/home/anarcat/.fuse") failed: Permission denied (13)
 24,456,268,098  98%   68.03MB/s    0:05:42 (xfr#159867, ir-chk=6875/172377) 
file has vanished: "/home/anarcat/.cache/mozilla/firefox/s2hwvqbu.quantum/cache2/entries/B3AB0CDA9C4454B3C1197E5A22669DF8EE849D90"
199,762,528,125  93%   74.82MB/s    0:42:26 (xfr#1437846, ir-chk=1018/1983979)rsync: [generator] recv_generator: mkdir "/mnt/home/anarcat/dist/supysonic/tests/assets/\#346" failed: Invalid or incomplete multibyte or wide character (84)
*** Skipping any contents from this failed directory ***
315,384,723,978  96%   76.82MB/s    1:05:15 (xfr#2256473, to-chk=0/2993950)    
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
Note the failure to transfer that supysonic file? It turns out they had a weird filename in their source tree, since removed, but it still showed how the utf8only feature might not be such a bad idea. At this point, the procedure was restarted all the way back to "Creating pools", after unmounting all ZFS filesystems (umount /mnt/run /mnt/boot/efi && umount -t zfs -a) and destroying the pool, which, surprisingly, doesn't require any confirmation (zpool destroy rpool). The second run was cleaner:
root@curie:~# for fs in /boot/ /boot/efi/ / /home/; do
        echo "syncing $fs to /mnt$fs..." && 
        rsync -aSHAXx --info=progress2 --delete $fs /mnt$fs
    done
syncing /boot/ to /mnt/boot/...
              0   0%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/299)  
syncing /boot/efi/ to /mnt/boot/efi/...
              0   0%    0.00kB/s    0:00:00 (xfr#0, to-chk=0/110)  
syncing / to /mnt/...
 28,019,033,070  97%   42.03MB/s    0:10:35 (xfr#703671, ir-chk=1093/833515)rsync: [generator] delete_file: rmdir(var/lib/docker) failed: Device or resource busy (16)
could not make way for new symlink: var/lib/docker
 34,081,807,102  98%   44.84MB/s    0:12:04 (xfr#736580, to-chk=0/867723)    
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
syncing /home/ to /mnt/home/...
rsync: [sender] readlink_stat("/home/anarcat/.fuse") failed: Permission denied (13)
IO error encountered -- skipping file deletion
 24,043,086,450  96%   62.03MB/s    0:06:09 (xfr#151819, ir-chk=15117/172571)
file has vanished: "/home/anarcat/.cache/mozilla/firefox/s2hwvqbu.quantum/cache2/entries/4C1FDBFEA976FF924D062FB990B24B897A77B84B"
315,423,626,507  96%   67.09MB/s    1:14:43 (xfr#2256845, to-chk=0/2994364)    
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1333) [sender=3.2.3]
Also note the transfer speed: we seem capped at 76MB/s, or 608Mbit/s. This is not as fast as I was expecting: the USB connection seems to be at around 5Gbps:
anarcat@curie:~$ lsusb -tv | head -4
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M
    ID 1d6b:0003 Linux Foundation 3.0 root hub
    |__ Port 1: Dev 4, If 0, Class=Mass Storage, Driver=uas, 5000M
        ID 0b05:1932 ASUSTek Computer, Inc.
So it shouldn't cap at that speed. It's possible the USB adapter is failing to give me the full speed though. It's not the M.2 SSD drive either, as that has a ~500MB/s bandwidth, according to its spec. At this point, we're about ready to do the final configuration. We drop to single user mode and do the rest of the procedure. That used to be shutdown now, but it seems like the systemd switch broke that, so now you can reboot into grub and pick the "recovery" option. Alternatively, you might try systemctl rescue, as I found out. I also wanted to copy the drive over to another new NVMe drive, but that failed: it looks like the USB controller I have doesn't work with older, non-NVMe drives.

Boot configuration Now we need to enter the new system to rebuild the boot loader and initrd and so on. First, we bind mounts and chroot into the ZFS disk:
mount --rbind /dev  /mnt/dev &&
mount --rbind /proc /mnt/proc &&
mount --rbind /sys  /mnt/sys &&
chroot /mnt /bin/bash
Next we add an extra service that imports the bpool on boot, to make sure it survives a zpool.cache destruction:
cat > /etc/systemd/system/zfs-import-bpool.service <<EOF
[Unit]
DefaultDependencies=no
Before=zfs-import-scan.service
Before=zfs-import-cache.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zpool import -N -o cachefile=none bpool
# Work-around to preserve zpool cache:
ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
[Install]
WantedBy=zfs-import.target
EOF
Enable the service:
systemctl enable zfs-import-bpool.service
I had to trim down /etc/fstab and /etc/crypttab to only contain references to the legacy filesystems (/srv is still BTRFS!). If we don't already have a tmpfs defined in /etc/fstab:
ln -s /usr/share/systemd/tmp.mount /etc/systemd/system/ &&
systemctl enable tmp.mount
Rebuild the boot loader with support for ZFS, but also to work around GRUB's missing zpool-features support:
grub-probe /boot | grep -q zfs &&
update-initramfs -c -k all &&
sed -i 's,GRUB_CMDLINE_LINUX.*,GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian",' /etc/default/grub &&
update-grub
For good measure, make sure the right disk is configured here, for example you might want to tag both drives in a RAID array:
dpkg-reconfigure grub-pc
Install grub to EFI while you're there:
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian --recheck --no-floppy
Filesystem mount ordering. The rationale here in the OpenZFS guide is a little strange, but I don't dare ignore that.
mkdir /etc/zfs/zfs-list.cache
touch /etc/zfs/zfs-list.cache/bpool
touch /etc/zfs/zfs-list.cache/rpool
zed -F &
Verify that zed updated the cache by making sure these are not empty:
cat /etc/zfs/zfs-list.cache/bpool
cat /etc/zfs/zfs-list.cache/rpool
Once the files have data, stop zed:
fg
Press Ctrl-C.
Fix the paths to eliminate /mnt:
sed -Ei "s|/mnt/?|/|" /etc/zfs/zfs-list.cache/*
Snapshot initial install:
zfs snapshot bpool/BOOT/debian@install
zfs snapshot rpool/ROOT/debian@install
Exit chroot:
exit

Finalizing One last sync was done in rescue mode:
for fs in /boot/ /boot/efi/ / /home/; do
    echo "syncing $fs to /mnt$fs..." && 
    rsync -aSHAXx --info=progress2 --delete $fs /mnt$fs
done
Then we unmount all filesystems:
mount | grep -v zfs | tac | awk '/\/mnt/ {print $3}' | xargs -i{} umount -lf {}
zpool export -a
Reboot, swap the drives, and boot in ZFS. Hurray!

Benchmarks These tests were run in single-user mode using fio, following the Ars Technica recommended tests, which are:
  • Single 4KiB random write process:
     fio --name=randwrite4k1x --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
    
  • 16 parallel 64KiB random write processes:
     fio --name=randwrite64k16x --ioengine=posixaio --rw=randwrite --bs=64k --size=256m --numjobs=16 --iodepth=16 --runtime=60 --time_based --end_fsync=1
    
  • Single 1MiB random write process:
     fio --name=randwrite1m1x --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
    
Strangely, that's not exactly what the author, Jim Salter, did in his actual test bench used in the ZFS benchmarking article. The first thing is there's no read test at all, which is already pretty strange. But also it doesn't include stuff like dropping caches or repeating results. So here's my variation, which I called fio-ars-bench.sh for now. It just batches a bunch of fio tests, one by one, 60 seconds each. It should take about 12 minutes to run, as there are 3 pairs of tests, read/write, with and without async. My bias, before building, running and analysing those results, is that ZFS should outperform the traditional stack on writes, but possibly not on reads. It's also possible it outperforms it on both, because it's a newer drive. A new test might be possible with a new external USB drive as well, although I doubt I will find the time to do this.
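I won't reproduce the whole script here, but a stripped-down sketch of the batching idea looks roughly like this (hypothetical; the real fio-ars-bench.sh has more tests and bookkeeping):
#!/bin/sh
# run each fio job for 60 seconds, dropping the page cache between runs
set -e
for rw in randread randwrite; do
    for sync in "" "--fsync=1"; do
        echo 3 > /proc/sys/vm/drop_caches
        fio --name="${rw}4k1x${sync}" --ioengine=posixaio --rw="$rw" \
            --bs=4k --size=4g --numjobs=1 --iodepth=1 \
            --runtime=60 --time_based --end_fsync=1 $sync
    done
done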

Results All tests were done on WD blue SN550 drives, which claim to be able to push 2400MB/s read and 1750MB/s write. An extra drive was bought to move the LVM setup from a WDC WDS500G1B0B-00AS40 SSD, a WD blue M.2 2280 SSD that was at least 5 years old, spec'd at 560MB/s read, 530MB/s write. Benchmarks were done on the M.2 SSD drive but discarded so that the drive difference is not a factor in the test. In practice, I'm going to assume we'll never reach those numbers because we're not actually NVMe (this is an old workstation!) so the bottleneck isn't the disk itself. For our purposes, it might still give us useful results.

Rescue test, LUKS/LVM/ext4 Those tests were performed with everything shutdown, after either entering the system in rescue mode, or by reaching that target with:
systemctl rescue
The network might have been started before or after the test as well:
systemctl start systemd-networkd
So it should be fairly reliable as basically nothing else is running. Raw numbers, from the job-curie-lvm.log, converted to MiB/s and manually merged:
test                     read I/O (MiB/s)  read IOPS  write I/O (MiB/s)  write IOPS
rand4k4g1x                          39.27      10052             212.15       54310
rand4k4g1x--fsync=1                 39.29      10057               2.73         699
rand64k256m16x                    1297.00      20751            1068.57       17097
rand64k256m16x--fsync=1           1290.90      20654             353.82        5661
rand1m16g1x                        315.15        315             563.77         563
rand1m16g1x--fsync=1               345.88        345             157.01         157
Peaks are at about 20k IOPS and ~1.3GiB/s read, 1GiB/s write in the 64KB blocks with 16 jobs. Slowest is the random 4k block sync write at an abysmal 3MB/s and 700 IOPS. The 1MB read/write tests have lower IOPS, but that is expected.

Rescue test, ZFS This test was also performed in rescue mode. Raw numbers, from the job-curie-zfs.log, converted to MiB/s and manually merged:
test                     read I/O (MiB/s)  read IOPS  write I/O (MiB/s)  write IOPS
rand4k4g1x                          77.20      19763              27.13        6944
rand4k4g1x--fsync=1                 76.16      19495               6.53        1673
rand64k256m16x                    1882.40      30118              70.58        1129
rand64k256m16x--fsync=1           1865.13      29842              71.98        1151
rand1m16g1x                        921.62        921             102.21         102
rand1m16g1x--fsync=1               908.37        908              64.30          64
Peaks are at 1.8GiB/s read, also in the 64k job like above, but much faster. The write is, as expected, much slower at 70MiB/s (compared to 1GiB/s!), but it should be noted the sync write doesn't degrade performance compared to async writes (although it's still below the LVM 300MB/s).

Conclusions Really, ZFS has trouble performing in all write conditions. The random 4k sync write test is the only place where ZFS outperforms LVM in writes, and barely (7MiB/s vs 3MiB/s). Everywhere else, writes are much slower, sometimes by an order of magnitude. And before some ZFS zealot jumps in talking about the SLOG or some other cache that could be added to improve performance, I'll remind you that those numbers are on a bare bones NVMe drive, pretty much as fast storage as you can find on this machine. Adding another NVMe drive as a cache probably will not improve write performance here. Still, those are very different results than the tests performed by Salter which show ZFS beating traditional configurations in all categories but uncached 4k reads (not writes!). That said, those tests are very different from the tests I performed here, where I test writes on a single disk, not a RAID array, which might explain the discrepancy. Also, note that neither LVM nor ZFS manages to reach the 2400MB/s read and 1750MB/s write performance specification. ZFS does manage to reach 82% of the read performance (1973MB/s) and LVM 64% of the write performance (1120MB/s). LVM hits 57% of the read performance and ZFS hits barely 6% of the write performance. Overall, I'm a bit disappointed in the ZFS write performance here, I must say. Maybe I need to tweak the record size or some other ZFS voodoo, but I'll note that I didn't have to do any such configuration on the other side to kick ZFS in the pants...

Real world experience This section documents not synthetic benchmarks, but actual real-world workloads, comparing before and after I switched my workstation to ZFS.

Docker performance I had the feeling that running some git hook (which was firing a Docker container) was "slower" somehow. It seems that, at runtime, ZFS backends are significantly slower than their overlayfs/ext4 equivalent:
May 16 14:42:52 curie systemd[1]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87\x2dinit-merged.mount: Succeeded.
May 16 14:42:52 curie systemd[5161]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87\x2dinit-merged.mount: Succeeded.
May 16 14:42:52 curie systemd[1]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87-merged.mount: Succeeded.
May 16 14:42:53 curie dockerd[1723]: time="2022-05-16T14:42:53.087219426-04:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10 pid=151170
May 16 14:42:53 curie systemd[1]: Started libcontainer container af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10.
May 16 14:42:54 curie systemd[1]: docker-af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10.scope: Succeeded.
May 16 14:42:54 curie dockerd[1723]: time="2022-05-16T14:42:54.047297800-04:00" level=info msg="shim disconnected" id=af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10
May 16 14:42:54 curie dockerd[998]: time="2022-05-16T14:42:54.051365015-04:00" level=info msg="ignoring event" container=af22586fba07014a4d10ab19da10cf280db7a43cad804d6c1e9f2682f12b5f10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 16 14:42:54 curie systemd[2444]: run-docker-netns-f5453c87c879.mount: Succeeded.
May 16 14:42:54 curie systemd[5161]: run-docker-netns-f5453c87c879.mount: Succeeded.
May 16 14:42:54 curie systemd[2444]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87-merged.mount: Succeeded.
May 16 14:42:54 curie systemd[5161]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87-merged.mount: Succeeded.
May 16 14:42:54 curie systemd[1]: run-docker-netns-f5453c87c879.mount: Succeeded.
May 16 14:42:54 curie systemd[1]: home-docker-overlay2-17e4d24228decc2d2d493efc401dbfb7ac29739da0e46775e122078d9daf3e87-merged.mount: Succeeded.
Translating this:
  • container setup: ~1 second
  • container runtime: ~1 second
  • container teardown: ~1 second
  • total runtime: 2-3 seconds
Obviously, those timestamps are not quite accurate enough to make precise measurements... After I switched to ZFS:
mai 30 15:31:39 curie systemd[1]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf\x2dinit.mount: Succeeded. 
mai 30 15:31:39 curie systemd[5287]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf\x2dinit.mount: Succeeded. 
mai 30 15:31:40 curie systemd[1]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf.mount: Succeeded. 
mai 30 15:31:40 curie systemd[5287]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf.mount: Succeeded. 
mai 30 15:31:41 curie dockerd[3199]: time="2022-05-30T15:31:41.551403693-04:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142 pid=141080 
mai 30 15:31:41 curie systemd[1]: run-docker-runtime\x2drunc-moby-42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142-runc.ZVcjvl.mount: Succeeded. 
mai 30 15:31:41 curie systemd[5287]: run-docker-runtime\x2drunc-moby-42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142-runc.ZVcjvl.mount: Succeeded. 
mai 30 15:31:41 curie systemd[1]: Started libcontainer container 42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142. 
mai 30 15:31:45 curie systemd[1]: docker-42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142.scope: Succeeded. 
mai 30 15:31:45 curie dockerd[3199]: time="2022-05-30T15:31:45.883019128-04:00" level=info msg="shim disconnected" id=42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142 
mai 30 15:31:45 curie dockerd[1726]: time="2022-05-30T15:31:45.883064491-04:00" level=info msg="ignoring event" container=42a1a1ed5912a7227148e997f442e7ab2e5cc3558aa3471548223c5888c9b142 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 
mai 30 15:31:45 curie systemd[1]: run-docker-netns-e45f5cf5f465.mount: Succeeded. 
mai 30 15:31:45 curie systemd[5287]: run-docker-netns-e45f5cf5f465.mount: Succeeded. 
mai 30 15:31:45 curie systemd[1]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf.mount: Succeeded. 
mai 30 15:31:45 curie systemd[5287]: var-lib-docker-zfs-graph-41ce08fb7a1d3a9c101694b82722f5621c0b4819bd1d9f070933fd1e00543cdf.mount: Succeeded.
That's double or triple the run time, from 2 seconds to 6 seconds. Most of the time is spent in run time, inside the container. Here's the breakdown:
  • container setup: ~2 seconds
  • container run: ~4 seconds
  • container teardown: ~1 second
  • total run time: about ~6-7 seconds
That's a two- to three-fold increase! Clearly something is going on here that I should tweak. It's possible that code path is less optimized in Docker. I also worry about podman, but apparently it also supports ZFS backends. Possibly it would perform better, but at this stage I wouldn't have a good comparison: maybe it would have performed better on non-ZFS as well...
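For a less noisy comparison than eyeballing journal timestamps, timing a trivial container run on each storage driver gives a similar picture. A quick sketch (the image name is just an example, any small image will do):
# end-to-end time for setup, run and teardown of a do-nothing container
time docker run --rm debian:bullseye-slim true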

Interactivity While doing the offsite backups (below), the system became somewhat "sluggish". I felt everything was slow, and I estimate it introduced ~50ms latency in any input device. Arguably, those are all USB and the external drive was connected through USB, but I suspect the ZFS drivers are not as well tuned with the scheduler as the regular filesystem drivers...

Recovery procedures For test purposes, I unmounted all systems during the procedure:
umount /mnt/boot/efi /mnt/run
umount -a -t zfs
zpool export -a
And disconnected the drive, to see how I would recover this system from another Linux system in case of a total motherboard failure. To import an existing pool, plug the device, then import the pool with an alternate root, so it doesn't mount over your existing filesystems, then you mount the root filesystem and all the others:
zpool import -l -a -R /mnt &&
zfs mount rpool/ROOT/debian &&
zfs mount -a &&
mount /dev/sdc2 /mnt/boot/efi &&
mount -t tmpfs tmpfs /mnt/run &&
mkdir /mnt/run/lock

Offsite backup Part of the goal of using ZFS is to simplify and harden backups. I wanted to experiment with shorter recovery times (specifically both the point-in-time recovery objective and the recovery time objective) and faster incremental backups. This is, therefore, part of my backup services. This section documents how an external NVMe enclosure was set up in a pool to mirror the datasets from my workstation. The final setup should include syncoid copying datasets to the backup server regularly, but I haven't finished that configuration yet.
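The eventual remote replication would presumably look something like this (a hypothetical sketch: the hostname and target dataset are made up, and nothing of the sort is configured yet):
# recursively replicate the pools to the backup server over SSH,
# e.g. from a cron job or systemd timer
syncoid -r rpool root@backup.example.com:rpool-backup/curie
syncoid -r bpool root@backup.example.com:bpool-backup/curie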

Partitioning The above partitioning procedure used sgdisk, but I couldn't figure out how to do this with sgdisk, so this uses sfdisk to dump the partition table from the first disk to an external, identical drive:
sfdisk -d /dev/nvme0n1 | sfdisk --no-reread /dev/sda --force

Pool creation This is similar to the main pool creation, except we tweaked a few bits after changing the upstream procedure:
zpool create \
        -o cachefile=/etc/zfs/zpool.cache \
        -o ashift=12 -d \
        -o feature@async_destroy=enabled \
        -o feature@bookmarks=enabled \
        -o feature@embedded_data=enabled \
        -o feature@empty_bpobj=enabled \
        -o feature@enabled_txg=enabled \
        -o feature@extensible_dataset=enabled \
        -o feature@filesystem_limits=enabled \
        -o feature@hole_birth=enabled \
        -o feature@large_blocks=enabled \
        -o feature@lz4_compress=enabled \
        -o feature@spacemap_histogram=enabled \
        -o feature@zpool_checkpoint=enabled \
        -O acltype=posixacl -O xattr=sa \
        -O compression=lz4 \
        -O devices=off \
        -O relatime=on \
        -O canmount=off \
        -O mountpoint=/boot -R /mnt \
        bpool-tubman /dev/sdb3
The changes from the main boot pool above are that the normalization=formD flag is dropped (see the note about it earlier) and, of course, the pool name and target device differ. Main pool creation is:
zpool create \
        -o ashift=12 \
        -O encryption=on -O keylocation=prompt -O keyformat=passphrase \
        -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
        -O compression=zstd \
        -O relatime=on \
        -O canmount=off \
        -O mountpoint=/ -R /mnt \
        rpool-tubman /dev/sdb4

First sync I used syncoid to copy all pools over to the external device. syncoid is a tool that's part of the sanoid project which is specifically designed to sync snapshots between pools, typically over SSH links, but it can also operate locally. The sanoid command had a --readonly argument to simulate changes, but syncoid didn't, so I tried to fix that with an upstream PR. It seems it would be better to do this by hand, but this was much easier. The full first sync was:
root@curie:/home/anarcat# ./bin/syncoid -r  bpool bpool-tubman
CRITICAL ERROR: Target bpool-tubman exists but has no snapshots matching with bpool!
                Replication to target would require destroying existing
                target. Cowardly refusing to destroy your existing target.
          NOTE: Target bpool-tubman dataset is < 64MB used - did you mistakenly run
                 zfs create bpool-tubman  on the target? ZFS initial
                replication must be to a NON EXISTENT DATASET, which will
                then be CREATED BY the initial replication process.
INFO: Sending oldest full snapshot bpool/BOOT@test (~ 42 KB) to new target filesystem:
44.2KiB 0:00:00 [4.19MiB/s] [========================================================================================================================] 103%            
INFO: Updating new target filesystem with incremental bpool/BOOT@test ... syncoid_curie_2022-05-30:12:50:39 (~ 4 KB):
2.13KiB 0:00:00 [ 114KiB/s] [===============================================================>                                                         ] 53%            
INFO: Sending oldest full snapshot bpool/BOOT/debian@install (~ 126.0 MB) to new target filesystem:
 126MiB 0:00:00 [ 308MiB/s] [=======================================================================================================================>] 100%            
INFO: Updating new target filesystem with incremental bpool/BOOT/debian@install ... syncoid_curie_2022-05-30:12:50:39 (~ 113.4 MB):
 113MiB 0:00:00 [ 315MiB/s] [=======================================================================================================================>] 100%
root@curie:/home/anarcat# ./bin/syncoid -r  rpool rpool-tubman
CRITICAL ERROR: Target rpool-tubman exists but has no snapshots matching with rpool!
                Replication to target would require destroying existing
                target. Cowardly refusing to destroy your existing target.
          NOTE: Target rpool-tubman dataset is < 64MB used - did you mistakenly run
                 zfs create rpool-tubman  on the target? ZFS initial
                replication must be to a NON EXISTENT DATASET, which will
                then be CREATED BY the initial replication process.
INFO: Sending oldest full snapshot rpool/ROOT@syncoid_curie_2022-05-30:12:50:51 (~ 69 KB) to new target filesystem:
44.2KiB 0:00:00 [2.44MiB/s] [===========================================================================>                                             ] 63%            
INFO: Sending oldest full snapshot rpool/ROOT/debian@install (~ 25.9 GB) to new target filesystem:
25.9GiB 0:03:33 [ 124MiB/s] [=======================================================================================================================>] 100%            
INFO: Updating new target filesystem with incremental rpool/ROOT/debian@install ... syncoid_curie_2022-05-30:12:50:52 (~ 3.9 GB):
3.92GiB 0:00:33 [ 119MiB/s] [======================================================================================================================>  ] 99%            
INFO: Sending oldest full snapshot rpool/home@syncoid_curie_2022-05-30:12:55:04 (~ 276.8 GB) to new target filesystem:
 277GiB 0:27:13 [ 174MiB/s] [=======================================================================================================================>] 100%            
INFO: Sending oldest full snapshot rpool/home/root@syncoid_curie_2022-05-30:13:22:19 (~ 2.2 GB) to new target filesystem:
2.22GiB 0:00:25 [90.2MiB/s] [=======================================================================================================================>] 100%            
INFO: Sending oldest full snapshot rpool/var@syncoid_curie_2022-05-30:13:22:47 (~ 5.6 GB) to new target filesystem:
5.56GiB 0:00:32 [ 176MiB/s] [=======================================================================================================================>] 100%            
INFO: Sending oldest full snapshot rpool/var/cache@syncoid_curie_2022-05-30:13:23:22 (~ 627.3 MB) to new target filesystem:
 627MiB 0:00:03 [ 169MiB/s] [=======================================================================================================================>] 100%            
INFO: Sending oldest full snapshot rpool/var/lib@syncoid_curie_2022-05-30:13:23:28 (~ 69 KB) to new target filesystem:
44.2KiB 0:00:00 [1.40MiB/s] [===========================================================================>                                             ] 63%            
INFO: Sending oldest full snapshot rpool/var/lib/docker@syncoid_curie_2022-05-30:13:23:28 (~ 442.6 MB) to new target filesystem:
 443MiB 0:00:04 [ 103MiB/s] [=======================================================================================================================>] 100%            
INFO: Sending oldest full snapshot rpool/var/lib/docker/05c0de7fabbea60500eaa495d0d82038249f6faa63b12914737c4d71520e62c5@266253254 (~ 6.3 MB) to new target filesystem:
6.49MiB 0:00:00 [12.9MiB/s] [========================================================================================================================] 102%            
INFO: Updating new target filesystem with incremental rpool/var/lib/docker/05c0de7fabbea60500eaa495d0d82038249f6faa63b12914737c4d71520e62c5@266253254 ... syncoid_curie_2022-05-30:13:23:34 (~ 4 KB):
1.52KiB 0:00:00 [27.6KiB/s] [============================================>                                                                            ] 38%            
INFO: Sending oldest full snapshot rpool/var/lib/flatpak@syncoid_curie_2022-05-30:13:23:36 (~ 2.0 GB) to new target filesystem:
2.00GiB 0:00:17 [ 115MiB/s] [=======================================================================================================================>] 100%            
INFO: Sending oldest full snapshot rpool/var/tmp@syncoid_curie_2022-05-30:13:23:55 (~ 57.0 MB) to new target filesystem:
61.8MiB 0:00:01 [45.0MiB/s] [========================================================================================================================] 108%            
INFO: Clone is recreated on target rpool-tubman/var/lib/docker/ed71ddd563a779ba6fb37b3b1d0cc2c11eca9b594e77b4b234867ebcb162b205 based on rpool/var/lib/docker/05c0de7fabbea60500eaa495d0d82038249f6faa63b12914737c4d71520e62c5@266253254
INFO: Sending oldest full snapshot rpool/var/lib/docker/ed71ddd563a779ba6fb37b3b1d0cc2c11eca9b594e77b4b234867ebcb162b205@syncoid_curie_2022-05-30:13:23:58 (~ 218.6 MB) to new target filesystem:
 219MiB 0:00:01 [ 151MiB/s] [=======================================================================================================================>] 100%
Funny how the CRITICAL ERROR doesn't actually stop syncoid and it just carries on merrily, even as it's telling you it's "cowardly refusing to destroy your existing target"... Maybe that's because my pull request broke something though... During the transfer, the computer was very sluggish: everything feels like it has ~30-50ms of extra latency:
anarcat@curie:sanoid$ LANG=C top -b -n 1 | head -20
top - 13:07:05 up 6 days,  4:01,  1 user,  load average: 16.13, 16.55, 11.83
Tasks: 606 total,   6 running, 598 sleeping,   0 stopped,   2 zombie
%Cpu(s): 18.8 us, 72.5 sy,  1.2 ni,  5.0 id,  1.2 wa,  0.0 hi,  1.2 si,  0.0 st
MiB Mem :  15898.4 total,   1387.6 free,  13170.0 used,   1340.8 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   1319.8 avail Mem 
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
     70 root      20   0       0      0      0 S  83.3   0.0   6:12.67 kswapd0
4024878 root      20   0  282644  96432  10288 S  44.4   0.6   0:11.43 puppet
3896136 root      20   0   35328  16528     48 S  22.2   0.1   2:08.04 mbuffer
3896135 root      20   0   10328    776    168 R  16.7   0.0   1:22.93 zfs
3896138 root      20   0   10588    788    156 R  16.7   0.0   1:49.30 zfs
    350 root       0 -20       0      0      0 R  11.1   0.0   1:03.53 z_rd_int
    351 root       0 -20       0      0      0 S  11.1   0.0   1:04.15 z_rd_int
3896137 root      20   0    4384    352    244 R  11.1   0.0   0:44.73 pv
4034094 anarcat   30  10   20028  13960   2428 S  11.1   0.1   0:00.70 mbsync
4036539 anarcat   20   0    9604   3464   2408 R  11.1   0.0   0:00.04 top
    352 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
    353 root       0 -20       0      0      0 S   5.6   0.0   1:03.64 z_rd_int
    354 root       0 -20       0      0      0 S   5.6   0.0   1:04.01 z_rd_int
I wonder how much of that is due to syncoid, particularly because I often saw mbuffer and pv in there, which are not strictly necessary to do those kinds of operations, as far as I understand. Once that's done, export the pools to disconnect the drive:
zpool export bpool-tubman
zpool export rpool-tubman

Raw disk benchmark Copied the 512GB SSD/M.2 device to another 1024GB NVMe/M.2 device:
anarcat@curie:~$ sudo dd if=/dev/sdb of=/dev/sdc bs=4M status=progress conv=fdatasync
499944259584 bytes (500 GB, 466 GiB) copied, 1713 s, 292 MB/s
119235+1 records in
119235+1 records out
500107862016 bytes (500 GB, 466 GiB) copied, 1719.93 s, 291 MB/s
... while both drives were over USB, whoohoo 300MB/s!

Monitoring You should be monitoring your ZFS pools regularly. Normally, the zed(8) daemon monitors all ZFS events. It is the thing that will report when a scrub failed, for example. See this configuration guide. Scrubs should be regularly scheduled to ensure consistency of the pool. This can be done in newer zfsutils-linux versions (bullseye-backports or bookworm) with one of these timers, depending on the desired frequency:
systemctl enable zfs-scrub-weekly@rpool.timer --now
systemctl enable zfs-scrub-monthly@rpool.timer --now
When the scrub runs, if it finds anything, it will emit an event which gets picked up by the zed daemon, which will then send a notification; see below for an example. TODO: deploy on curie, if possible (probably not because no RAID) TODO: this should be in Puppet
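On older releases that lack those timer units, a plain cron job is a workable substitute. A sketch, with an arbitrarily chosen schedule:
# /etc/cron.d/zfs-scrub: scrub rpool at 03:00 on the first of each month
0 3 1 * * root /sbin/zpool scrub rpool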

Scrub warning example So what happens when problems are found? Here's an example of how I dealt with an error I received. After setting up another server (tubman) with ZFS, I eventually ended up getting a warning from the ZFS toolchain.
Date: Sun, 09 Oct 2022 00:58:08 -0400
From: root <root@anarc.at>
To: root@anarc.at
Subject: ZFS scrub_finish event for rpool on tubman
ZFS has finished a scrub:
   eid: 39536
 class: scrub_finish
  host: tubman
  time: 2022-10-09 00:58:07-0400
  pool: rpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 00:33:57 with 0 errors on Sun Oct  9 00:58:07 2022
config:
        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb4    ONLINE       0     1     0
            sdc4    ONLINE       0     0     0
        cache
          sda3      ONLINE       0     0     0
errors: No known data errors
This, in itself, is a little worrisome. But it helpfully links to this more detailed documentation (and props for that: the link still works) which explains this is a "minor" problem (something that could be included in the report). In this case, this happened on a server set up on 2021-04-28, but the disks and server hardware are much older. The server itself (marcos v1) was built around 2011, over 10 years ago now. The hard drive in question is:
root@tubman:~# smartctl -i -qnoserial /dev/sdb
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.0-15-amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family:     Seagate BarraCuda 3.5
Device Model:     ST4000DM004-2CV104
Firmware Version: 0001
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5425 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Tue Oct 11 11:02:32 2022 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Some more SMART stats:
root@tubman:~# smartctl -a -qnoserial /dev/sdb | grep -e Head_Flying_Hours -e Power_On_Hours -e Total_LBA -e 'Sector Sizes'
Sector Sizes:     512 bytes logical, 4096 bytes physical
  9 Power_On_Hours          0x0032   086   086   000    Old_age   Always       -       12464 (206 202 0)
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       10966h+55m+23.757s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       21107792664
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       3201579750
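To turn those raw counters into more familiar units, something like this works (bc is just one way to do the arithmetic):
echo '21107792664 * 512 / 1000^4' | bc -l    # ~10.8 TB written
echo '3201579750 * 512 * 8' | bc             # ~1.3e13 bits read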
That's over a year of power on, which shouldn't be so bad. It has written about 10TB of data (21107792664 LBAs * 512 bytes/LBA), which amounts to roughly two to three full writes of the drive. According to its specification, this device is supposed to support 55 TB/year of writes, so we're far below spec. Note that we are also still far from the "non-recoverable read errors per bit read" spec (1 per 10E15), as we've basically read 13E12 bits (3201579750 LBAs * 512 bytes/LBA * 8 bits/byte = 13E12 bits). It's likely this disk was made in 2018, so it is in its fourth year. Interestingly, /dev/sdc is also a Seagate drive, but of a different series:
root@tubman:~# smartctl -qnoserial  -i /dev/sdb
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.0-15-amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family:     Seagate BarraCuda 3.5
Device Model:     ST4000DM004-2CV104
Firmware Version: 0001
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5425 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Tue Oct 11 11:21:35 2022 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
It has seen many more reads than the other disk, which is also interesting:
root@tubman:~# smartctl -a -qnoserial /dev/sdc | grep -e Head_Flying_Hours -e Power_On_Hours -e Total_LBA -e 'Sector Sizes'
Sector Sizes:     512 bytes logical, 4096 bytes physical
  9 Power_On_Hours          0x0032   059   059   000    Old_age   Always       -       36240
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       33994h+10m+52.118s
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       30730174438
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       51894566538
That's almost 4 years of Head_Flying_Hours, and over 4 years (4 years and 48 days) of Power_On_Hours. The copyright date on that drive's specs goes back to 2016, so it's a much older drive. The SMART self-test succeeded.
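For reference, a long self-test can be kicked off and checked like this (a generic sketch, not necessarily the exact commands that were run on tubman):
smartctl -t long /dev/sdc      # start the long (offline) self-test
smartctl -l selftest /dev/sdc  # check the self-test log once it has finished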

Remaining issues
  • TODO: move send/receive backups to offsite host, see also zfs for alternatives to syncoid/sanoid there
  • TODO: setup backup cron job (or timer?)
  • TODO: swap still not setup on curie, see zfs
  • TODO: document this somewhere: bpool and rpool are both pools and datasets. That's pretty confusing, but also very useful because it allows for pool-wide recursive snapshots, which are used for the backup system (see the sketch right after this list)
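A rough sketch of what that pool-wide recursive snapshot and the send/receive backup could look like (the snapshot name, target host, and target dataset are made up for illustration):
# snapshot every dataset in the pool atomically
zfs snapshot -r rpool@backup-2022-10-09
zfs list -t snapshot -r rpool
# or let syncoid create snapshots and do incremental sends to an offsite host
syncoid -r rpool backup@offsite.example.com:tank/backups/curie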

fio improvements I really want to improve my experience with fio. Right now, I'm just cargo-culting stuff from other folks and I don't really like it. stressant is a good example of my struggles, in the sense that it doesn't really work that well for disk tests. I would love to have just a single .fio job file that lists multiple jobs to run serially. For example, this file describes the above workload pretty well:
[global]
# cargo-culting Salter
fallocate=none
ioengine=posixaio
runtime=60
time_based=1
end_fsync=1
stonewall=1
group_reporting=1
# no need to drop caches, done by default
# invalidate=1
# Single 4KiB random read/write process
[randread-4k-4g-1x]
rw=randread
bs=4k
size=4g
numjobs=1
iodepth=1
[randwrite-4k-4g-1x]
rw=randwrite
bs=4k
size=4g
numjobs=1
iodepth=1
# 16 parallel 64KiB random read/write processes:
[randread-64k-256m-16x]
rw=randread
bs=64k
size=256m
numjobs=16
iodepth=16
[randwrite-64k-256m-16x]
rw=randwrite
bs=64k
size=256m
numjobs=16
iodepth=16
# Single 1MiB random read/write process
[randread-1m-16g-1x]
rw=randread
bs=1m
size=16g
numjobs=1
iodepth=1
[randwrite-1m-16g-1x]
rw=randwrite
bs=1m
size=16g
numjobs=1
iodepth=1
... except the jobs are actually started in parallel, even though they are stonewall'd, as far as I can tell by the reports. I sent a mail to the fio mailing list for clarification. It looks like the jobs are started in parallel, but actually (correctly) run serially. It seems like this might just be a matter of reporting the right timestamps in the end, although it does feel like starting all the processes (even if not doing any work yet) could skew the results.
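One workaround, if the combined reporting is the real problem, might be to drive each job separately with fio's --section option; a hedged sketch, assuming the job file above is saved as jobs.fio:
for s in randread-4k-4g-1x randwrite-4k-4g-1x \
         randread-64k-256m-16x randwrite-64k-256m-16x \
         randread-1m-16g-1x randwrite-1m-16g-1x; do
    fio --section="$s" jobs.fio   # run one job at a time, one report each
done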

Hangs during procedure During the procedure, it happened a few times that any ZFS command would completely hang. It seems that using an external USB drive to sync stuff didn't work so well: sometimes it would reconnect under a different device name (from sdc to sdd, for example), and this would greatly confuse ZFS. Here, for example, is sdd reappearing out of the blue:
May 19 11:22:53 curie kernel: [  699.820301] scsi host4: uas
May 19 11:22:53 curie kernel: [  699.820544] usb 2-1: authorized to connect
May 19 11:22:53 curie kernel: [  699.922433] scsi 4:0:0:0: Direct-Access     ROG      ESD-S1C          0    PQ: 0 ANSI: 6
May 19 11:22:53 curie kernel: [  699.923235] sd 4:0:0:0: Attached scsi generic sg2 type 0
May 19 11:22:53 curie kernel: [  699.923676] sd 4:0:0:0: [sdd] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
May 19 11:22:53 curie kernel: [  699.923788] sd 4:0:0:0: [sdd] Write Protect is off
May 19 11:22:53 curie kernel: [  699.923949] sd 4:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 19 11:22:53 curie kernel: [  699.924149] sd 4:0:0:0: [sdd] Optimal transfer size 33553920 bytes
May 19 11:22:53 curie kernel: [  699.961602]  sdd: sdd1 sdd2 sdd3 sdd4
May 19 11:22:53 curie kernel: [  699.996083] sd 4:0:0:0: [sdd] Attached SCSI disk
Next time I run a ZFS command (say zpool list), the command completely hangs (D state) and this comes up in the logs:
May 19 11:34:21 curie kernel: [ 1387.914843] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=71344128 size=4096 flags=184880
May 19 11:34:21 curie kernel: [ 1387.914859] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=205565952 size=4096 flags=184880
May 19 11:34:21 curie kernel: [ 1387.914874] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=272789504 size=4096 flags=184880
May 19 11:34:21 curie kernel: [ 1387.914906] zio pool=bpool vdev=/dev/sdc3 error=5 type=1 offset=270336 size=8192 flags=b08c1
May 19 11:34:21 curie kernel: [ 1387.914932] zio pool=bpool vdev=/dev/sdc3 error=5 type=1 offset=1073225728 size=8192 flags=b08c1
May 19 11:34:21 curie kernel: [ 1387.914948] zio pool=bpool vdev=/dev/sdc3 error=5 type=1 offset=1073487872 size=8192 flags=b08c1
May 19 11:34:21 curie kernel: [ 1387.915165] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=272793600 size=4096 flags=184880
May 19 11:34:21 curie kernel: [ 1387.915183] zio pool=bpool vdev=/dev/sdc3 error=5 type=2 offset=339853312 size=4096 flags=184880
May 19 11:34:21 curie kernel: [ 1387.915648] WARNING: Pool 'bpool' has encountered an uncorrectable I/O failure and has been suspended.
May 19 11:34:21 curie kernel: [ 1387.915648] 
May 19 11:37:25 curie kernel: [ 1571.558614] task:txg_sync        state:D stack:    0 pid:  997 ppid:     2 flags:0x00004000
May 19 11:37:25 curie kernel: [ 1571.558623] Call Trace:
May 19 11:37:25 curie kernel: [ 1571.558640]  __schedule+0x282/0x870
May 19 11:37:25 curie kernel: [ 1571.558650]  schedule+0x46/0xb0
May 19 11:37:25 curie kernel: [ 1571.558670]  schedule_timeout+0x8b/0x140
May 19 11:37:25 curie kernel: [ 1571.558675]  ? __next_timer_interrupt+0x110/0x110
May 19 11:37:25 curie kernel: [ 1571.558678]  io_schedule_timeout+0x4c/0x80
May 19 11:37:25 curie kernel: [ 1571.558689]  __cv_timedwait_common+0x12b/0x160 [spl]
May 19 11:37:25 curie kernel: [ 1571.558694]  ? add_wait_queue_exclusive+0x70/0x70
May 19 11:37:25 curie kernel: [ 1571.558702]  __cv_timedwait_io+0x15/0x20 [spl]
May 19 11:37:25 curie kernel: [ 1571.558816]  zio_wait+0x129/0x2b0 [zfs]
May 19 11:37:25 curie kernel: [ 1571.558929]  dsl_pool_sync+0x461/0x4f0 [zfs]
May 19 11:37:25 curie kernel: [ 1571.559032]  spa_sync+0x575/0xfa0 [zfs]
May 19 11:37:25 curie kernel: [ 1571.559138]  ? spa_txg_history_init_io+0x101/0x110 [zfs]
May 19 11:37:25 curie kernel: [ 1571.559245]  txg_sync_thread+0x2e0/0x4a0 [zfs]
May 19 11:37:25 curie kernel: [ 1571.559354]  ? txg_fini+0x240/0x240 [zfs]
May 19 11:37:25 curie kernel: [ 1571.559366]  thread_generic_wrapper+0x6f/0x80 [spl]
May 19 11:37:25 curie kernel: [ 1571.559376]  ? __thread_exit+0x20/0x20 [spl]
May 19 11:37:25 curie kernel: [ 1571.559379]  kthread+0x11b/0x140
May 19 11:37:25 curie kernel: [ 1571.559382]  ? __kthread_bind_mask+0x60/0x60
May 19 11:37:25 curie kernel: [ 1571.559386]  ret_from_fork+0x22/0x30
May 19 11:37:25 curie kernel: [ 1571.559401] task:zed             state:D stack:    0 pid: 1564 ppid:     1 flags:0x00000000
May 19 11:37:25 curie kernel: [ 1571.559404] Call Trace:
May 19 11:37:25 curie kernel: [ 1571.559409]  __schedule+0x282/0x870
May 19 11:37:25 curie kernel: [ 1571.559412]  ? __kmalloc_node+0x141/0x2b0
May 19 11:37:25 curie kernel: [ 1571.559417]  schedule+0x46/0xb0
May 19 11:37:25 curie kernel: [ 1571.559420]  schedule_preempt_disabled+0xa/0x10
May 19 11:37:25 curie kernel: [ 1571.559424]  __mutex_lock.constprop.0+0x133/0x460
May 19 11:37:25 curie kernel: [ 1571.559435]  ? nvlist_xalloc.part.0+0x68/0xc0 [znvpair]
May 19 11:37:25 curie kernel: [ 1571.559537]  spa_all_configs+0x41/0x120 [zfs]
May 19 11:37:25 curie kernel: [ 1571.559644]  zfs_ioc_pool_configs+0x17/0x70 [zfs]
May 19 11:37:25 curie kernel: [ 1571.559752]  zfsdev_ioctl_common+0x697/0x870 [zfs]
May 19 11:37:25 curie kernel: [ 1571.559758]  ? _copy_from_user+0x28/0x60
May 19 11:37:25 curie kernel: [ 1571.559860]  zfsdev_ioctl+0x53/0xe0 [zfs]
May 19 11:37:25 curie kernel: [ 1571.559866]  __x64_sys_ioctl+0x83/0xb0
May 19 11:37:25 curie kernel: [ 1571.559869]  do_syscall_64+0x33/0x80
May 19 11:37:25 curie kernel: [ 1571.559873]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
May 19 11:37:25 curie kernel: [ 1571.559876] RIP: 0033:0x7fcf0ef32cc7
May 19 11:37:25 curie kernel: [ 1571.559878] RSP: 002b:00007fcf0e181618 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
May 19 11:37:25 curie kernel: [ 1571.559881] RAX: ffffffffffffffda RBX: 000055b212f972a0 RCX: 00007fcf0ef32cc7
May 19 11:37:25 curie kernel: [ 1571.559883] RDX: 00007fcf0e181640 RSI: 0000000000005a04 RDI: 000000000000000b
May 19 11:37:25 curie kernel: [ 1571.559885] RBP: 00007fcf0e184c30 R08: 00007fcf08016810 R09: 00007fcf08000080
May 19 11:37:25 curie kernel: [ 1571.559886] R10: 0000000000080000 R11: 0000000000000246 R12: 000055b212f972a0
May 19 11:37:25 curie kernel: [ 1571.559888] R13: 0000000000000000 R14: 00007fcf0e181640 R15: 0000000000000000
May 19 11:37:25 curie kernel: [ 1571.559980] task:zpool           state:D stack:    0 pid:11815 ppid:  3816 flags:0x00004000
May 19 11:37:25 curie kernel: [ 1571.559983] Call Trace:
May 19 11:37:25 curie kernel: [ 1571.559988]  __schedule+0x282/0x870
May 19 11:37:25 curie kernel: [ 1571.559992]  schedule+0x46/0xb0
May 19 11:37:25 curie kernel: [ 1571.559995]  io_schedule+0x42/0x70
May 19 11:37:25 curie kernel: [ 1571.560004]  cv_wait_common+0xac/0x130 [spl]
May 19 11:37:25 curie kernel: [ 1571.560008]  ? add_wait_queue_exclusive+0x70/0x70
May 19 11:37:25 curie kernel: [ 1571.560118]  txg_wait_synced_impl+0xc9/0x110 [zfs]
May 19 11:37:25 curie kernel: [ 1571.560223]  txg_wait_synced+0xc/0x40 [zfs]
May 19 11:37:25 curie kernel: [ 1571.560325]  spa_export_common+0x4cd/0x590 [zfs]
May 19 11:37:25 curie kernel: [ 1571.560430]  ? zfs_log_history+0x9c/0xf0 [zfs]
May 19 11:37:25 curie kernel: [ 1571.560537]  zfsdev_ioctl_common+0x697/0x870 [zfs]
May 19 11:37:25 curie kernel: [ 1571.560543]  ? _copy_from_user+0x28/0x60
May 19 11:37:25 curie kernel: [ 1571.560644]  zfsdev_ioctl+0x53/0xe0 [zfs]
May 19 11:37:25 curie kernel: [ 1571.560649]  __x64_sys_ioctl+0x83/0xb0
May 19 11:37:25 curie kernel: [ 1571.560653]  do_syscall_64+0x33/0x80
May 19 11:37:25 curie kernel: [ 1571.560656]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
May 19 11:37:25 curie kernel: [ 1571.560659] RIP: 0033:0x7fdc23be2cc7
May 19 11:37:25 curie kernel: [ 1571.560661] RSP: 002b:00007ffc8c792478 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
May 19 11:37:25 curie kernel: [ 1571.560664] RAX: ffffffffffffffda RBX: 000055942ca49e20 RCX: 00007fdc23be2cc7
May 19 11:37:25 curie kernel: [ 1571.560666] RDX: 00007ffc8c792490 RSI: 0000000000005a03 RDI: 0000000000000003
May 19 11:37:25 curie kernel: [ 1571.560667] RBP: 00007ffc8c795e80 R08: 00000000ffffffff R09: 00007ffc8c792310
May 19 11:37:25 curie kernel: [ 1571.560669] R10: 000055942ca49e30 R11: 0000000000000246 R12: 00007ffc8c792490
May 19 11:37:25 curie kernel: [ 1571.560671] R13: 000055942ca49e30 R14: 000055942aed2c20 R15: 00007ffc8c795a40
Here's another example, where you see the USB controller bleeping out and back into existence:
mai 19 11:38:39 curie kernel: usb 2-1: USB disconnect, device number 2
mai 19 11:38:39 curie kernel: sd 4:0:0:0: [sdd] Synchronizing SCSI cache
mai 19 11:38:39 curie kernel: sd 4:0:0:0: [sdd] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
mai 19 11:39:25 curie kernel: INFO: task zed:1564 blocked for more than 241 seconds.
mai 19 11:39:25 curie kernel:       Tainted: P          IOE     5.10.0-14-amd64 #1 Debian 5.10.113-1
mai 19 11:39:25 curie kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
mai 19 11:39:25 curie kernel: task:zed             state:D stack:    0 pid: 1564 ppid:     1 flags:0x00000000
mai 19 11:39:25 curie kernel: Call Trace:
mai 19 11:39:25 curie kernel:  __schedule+0x282/0x870
mai 19 11:39:25 curie kernel:  ? __kmalloc_node+0x141/0x2b0
mai 19 11:39:25 curie kernel:  schedule+0x46/0xb0
mai 19 11:39:25 curie kernel:  schedule_preempt_disabled+0xa/0x10
mai 19 11:39:25 curie kernel:  __mutex_lock.constprop.0+0x133/0x460
mai 19 11:39:25 curie kernel:  ? nvlist_xalloc.part.0+0x68/0xc0 [znvpair]
mai 19 11:39:25 curie kernel:  spa_all_configs+0x41/0x120 [zfs]
mai 19 11:39:25 curie kernel:  zfs_ioc_pool_configs+0x17/0x70 [zfs]
mai 19 11:39:25 curie kernel:  zfsdev_ioctl_common+0x697/0x870 [zfs]
mai 19 11:39:25 curie kernel:  ? _copy_from_user+0x28/0x60
mai 19 11:39:25 curie kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
mai 19 11:39:25 curie kernel:  __x64_sys_ioctl+0x83/0xb0
mai 19 11:39:25 curie kernel:  do_syscall_64+0x33/0x80
mai 19 11:39:25 curie kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
mai 19 11:39:25 curie kernel: RIP: 0033:0x7fcf0ef32cc7
mai 19 11:39:25 curie kernel: RSP: 002b:00007fcf0e181618 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
mai 19 11:39:25 curie kernel: RAX: ffffffffffffffda RBX: 000055b212f972a0 RCX: 00007fcf0ef32cc7
mai 19 11:39:25 curie kernel: RDX: 00007fcf0e181640 RSI: 0000000000005a04 RDI: 000000000000000b
mai 19 11:39:25 curie kernel: RBP: 00007fcf0e184c30 R08: 00007fcf08016810 R09: 00007fcf08000080
mai 19 11:39:25 curie kernel: R10: 0000000000080000 R11: 0000000000000246 R12: 000055b212f972a0
mai 19 11:39:25 curie kernel: R13: 0000000000000000 R14: 00007fcf0e181640 R15: 0000000000000000
mai 19 11:39:25 curie kernel: INFO: task zpool:11815 blocked for more than 241 seconds.
mai 19 11:39:25 curie kernel:       Tainted: P          IOE     5.10.0-14-amd64 #1 Debian 5.10.113-1
mai 19 11:39:25 curie kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
mai 19 11:39:25 curie kernel: task:zpool           state:D stack:    0 pid:11815 ppid:  2621 flags:0x00004004
mai 19 11:39:25 curie kernel: Call Trace:
mai 19 11:39:25 curie kernel:  __schedule+0x282/0x870
mai 19 11:39:25 curie kernel:  schedule+0x46/0xb0
mai 19 11:39:25 curie kernel:  io_schedule+0x42/0x70
mai 19 11:39:25 curie kernel:  cv_wait_common+0xac/0x130 [spl]
mai 19 11:39:25 curie kernel:  ? add_wait_queue_exclusive+0x70/0x70
mai 19 11:39:25 curie kernel:  txg_wait_synced_impl+0xc9/0x110 [zfs]
mai 19 11:39:25 curie kernel:  txg_wait_synced+0xc/0x40 [zfs]
mai 19 11:39:25 curie kernel:  spa_export_common+0x4cd/0x590 [zfs]
mai 19 11:39:25 curie kernel:  ? zfs_log_history+0x9c/0xf0 [zfs]
mai 19 11:39:25 curie kernel:  zfsdev_ioctl_common+0x697/0x870 [zfs]
mai 19 11:39:25 curie kernel:  ? _copy_from_user+0x28/0x60
mai 19 11:39:25 curie kernel:  zfsdev_ioctl+0x53/0xe0 [zfs]
mai 19 11:39:25 curie kernel:  __x64_sys_ioctl+0x83/0xb0
mai 19 11:39:25 curie kernel:  do_syscall_64+0x33/0x80
mai 19 11:39:25 curie kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
mai 19 11:39:25 curie kernel: RIP: 0033:0x7fdc23be2cc7
mai 19 11:39:25 curie kernel: RSP: 002b:00007ffc8c792478 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
mai 19 11:39:25 curie kernel: RAX: ffffffffffffffda RBX: 000055942ca49e20 RCX: 00007fdc23be2cc7
mai 19 11:39:25 curie kernel: RDX: 00007ffc8c792490 RSI: 0000000000005a03 RDI: 0000000000000003
mai 19 11:39:25 curie kernel: RBP: 00007ffc8c795e80 R08: 00000000ffffffff R09: 00007ffc8c792310
mai 19 11:39:25 curie kernel: R10: 000055942ca49e30 R11: 0000000000000246 R12: 00007ffc8c792490
mai 19 11:39:25 curie kernel: R13: 000055942ca49e30 R14: 000055942aed2c20 R15: 00007ffc8c795a40
I understand those are rather extreme conditions: I would fully expect the pool to stop working if the underlying drives disappear. What doesn't seem acceptable is that a command would completely hang like this.
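One mitigation for the device-renaming half of the problem might be to import pools using persistent device names, so a drive that comes back as sdd instead of sdc still matches its vdev path; a sketch, using the bpool name from the logs above:
zpool export bpool
zpool import -d /dev/disk/by-id bpool
zpool status bpool     # vdevs now listed under their stable by-id names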

References See the zfs documentation for more information about ZFS, and tubman for another installation and migration procedure.

9 November 2022

Debian Brasil: Brazilian Debian Maintainers and Developers since July 2015

Since September 2015, the Debian Project's publicity team has been publishing, every two months, lists with the names of the new Debian Developers (DD) and Debian Maintainers (DM). We are taking advantage of these lists to publish below the names of the Brazilians who have become Debian Developers and Maintainers since July 2015. Debian Developers / DDs: Marcos Talau, Fabio Augusto De Muzio Tobich, Gabriel F. T. Gomes, Thiago Andrade Marques, Márcio de Souza Oliveira, Paulo Henrique de Lima Santana, Samuel Henrique, Sérgio Durigan Júnior, Daniel Lenharo de Souza, Giovani Augusto Ferreira, Adriano Rafael Gomes, Breno Leitão, Lucas Kanashiro, Herbert Parentes Fortes Neto. Debian Maintainers / DMs: Guilherme de Paula Xavier Segundo, David da Silva Polverari, Paulo Roberto Alves de Oliveira, Sergio Almeida Cipriano Junior, Francisco Vilmar Cardoso Ruviaro, William Grzybowski, Tiago Ilieve.
Notes:
  1. This list will be updated whenever the Debian publicity team publishes new lists of DMs and DDs that include Brazilians.
  2. To see the complete list of Debian Maintainers and Developers, including other Brazilians from before July 2015, visit: https://nm.debian.org/public/people


4 October 2022

Russ Allbery: Review: The Dragon Never Sleeps

Review: The Dragon Never Sleeps, by Glen Cook
Publisher: Night Shade Books
Copyright: 1988
Printing: 2008
ISBN: 1-59780-099-6
Format: MOBI
Pages: 449
Canon Space is run, in a way, by the noble mercantile houses, who spread their cities, colonies, and mines through the mysterious Web that allows faster-than-light interstellar travel. The true rulers of Canon Space, though, are the Guardships: enormous, undefeatable starships run by crews who have become effectively immortal by repeated uploading and reincarnation. Or, in the case of the Deified, without reincarnation, observing, ordering, advising, and meddling from an entirely virtual existence. The Guardships have enforced the status quo for four thousand years. House Tregesser thinks they have the means to break the stranglehold of the Guardships. They have contact with Outsiders from beyond Canon Space who can provide advanced technology. They have their own cloning technology, which they use to create backup copies of their elites. And they have Lupo Provik, a quietly brilliant schemer who has devoted his life to destroying Guardships. This book was so bad. A more sensible person than I would have given up after the first hundred pages, but I was too stubborn. The stubbornness did not pay off. Sometimes I pick up an older SFF novel and I'm reminded of how much the quality bar in the field has been raised over the past twenty years. It's not entirely fair to treat The Dragon Never Sleeps as typical of 1980s science fiction: Cook is far better known for his Black Company military fantasy series, this is one of his minor novels, and it's only been intermittently in print. But I'm dubious this would have been published at all today. First, the writing is awful. It's choppy, cliched, awkward, and has no rhythm or sense of beauty. Here's a nearly random paragraph near the beginning of the book as a sample:
He hurled thunders and lightnings with renewed fury. The whole damned universe was out to frustrate him. XII Fulminata! What the hell? Was some malign force ranged against him? That was his most secret fear. That somehow someone or something was using him the way he used so many others.
(Yes, this is one of the main characters throwing a temper tantrum with a special effects machine.) In a book of 450 pages, there are 151 chapters, and most chapters switch viewpoint characters. Most of them also end with a question or some vaguely foreboding sentence to try to build tension, and while I'm willing to admit that sometimes works when used sparingly, every three pages is not sparingly. This book is also weirdly empty of description for its size. We get a few set pieces, a few battles, and a sentence or two of physical description of most characters when they're first introduced, but it's astonishing how little of a mental image I had of this universe after reading the whole book. Cook probably describes a Guardship at some point in this book, but if he does, it completely failed to stick in my memory. There are aliens that everyone recognizes as aliens, so presumably they look different than humans, but for most of them I have no idea how. Very belatedly we're told one important species (which never gets a name) has a distinctive smell. That's about it. Instead, nearly the whole book is dialogue and scheming. It's clear that Cook is intending to write a story of schemes and counter-schemes and jousting between brilliant characters. This can work if the dialogue is sufficiently sharp and snappy to carry the story. It is not.
"What mischief have you been up to, Kez Maefele?" "Staying alive in a hostile universe." "You've had more than your share of luck." "Perhaps luck had nothing to do with it, WarAvocat. Till now." "Luck has run out. The Ku Question has run its course. The symbol is about to receive its final blow."
There are hundreds of pages of this sort of thing. The setting is at least intriguing, if not stunningly original. There are immortal warships oppressing human space, mysterious Outsiders, great house politics, and an essentially immortal alien warrior who ends up carrying most of the story. That's material for a good space opera if the reader slowly learns the shape of the universe, its history, and its landmarks and political factions. Or the author can decline to explain any of that. I suppose that's also a choice. Here are some things that you may have been curious about after reading my summary, and which I'm still curious about after having finished the book: What laws do the Guardships impose and what's the philosophy behind those laws? How does the economic system work? Who built the Guardships originally, and how? How do the humans outside of Canon Space live? Who are the Ku? Why did they start fighting the humans? How many other aliens are there? What do they think of this? How does the Canon government work? How have the Guardships remained technologically superior for four thousand years? Even where the reader gets a partial explanation, such as what Web is and how it was built, it's an unimportant aside that's largely devoid of a sense of wonder. The one piece of world-building that this book is interested in is the individual Guardships and the different ways in which they've handled millennia of self-contained patrol, and even there we only get to see a few of them. There is a plot with appropriately epic scope, but even that is undermined by the odd pacing. Five, ten, or fifty years sometimes goes by in a sentence. A war starts, with apparently enormous implications for Canon Space, and then we learn that it continues for years without warranting narrative comment. This is done without transitions and without signposts for the reader; it's just another sentence in the narration, mixed in with the rhetorical questions and clumsy foreshadowing. I would like to tell you that at least the book has a satisfying ending that resolves the plot conflict that it finally reveals to the reader, but I had a hard time understanding why the ending even mattered. The plot was so difficult to follow that I'm sure I missed something, but it's not difficult to follow in the fun way that would make me want to re-read it. It's difficult to follow because Cook doesn't seem able to explain the plot in his head to the reader in any coherent form. I think the status quo was slightly disrupted? Maybe? Also, I no longer care. Oh, and there's a gene-engineered sex slave in this book, who various male characters are very protective and possessive of, who never develops much of a personality, and who has no noticeable impact on the plot despite being a major character. Yay. This was one of the worst books I've read in a long time. In retrospect, it was an awful place to start with Glen Cook. Hopefully his better-known works are also better-written, but I can't say I feel that inspired to find out. Rating: 2 out of 10

3 October 2022

Russ Allbery: Review: Jingo

Review: Jingo, by Terry Pratchett
Series: Discworld #21
Publisher: Harper
Copyright: 1997
Printing: May 2014
ISBN: 0-06-228020-1
Format: Mass market
Pages: 455
This is the 21st Discworld novel and relies on the previous Watch novels for characterization and cast development. I would not start here. In the middle of the Circle Sea, the body of water between Ankh-Morpork and the desert empire of Klatch, a territorial squabble between one fishing family from Ankh-Morpork and one from Klatch is interrupted by a weathercock rising dramatically from the sea. When the weathercock is shortly followed by the city to which it is attached and the island on which that city is resting, it's justification for more than a fishing squabble. It's a good reason for a war over new territory. The start of hostilities is an assassination attempt on a prince of Klatch. Vimes and the Watch start investigating, but politics outraces police work. Wars are a matter for the nobility and their armies, not for normal civilian leadership. Lord Vetinari resigns, leaving the city under the command of Lord Rust, who is eager for a glorious military victory against their long-term rivals. The Klatchians seem equally eager to oblige. One of the useful properties of a long series is that you build up a cast of characters you can throw at a plot, and if you can assume the reader has read enough of the previous books, you don't have to spend a lot of time on establishing characterization and can get straight to the story. Pratchett uses that here. You could read this cold, I suppose, because most of the Watch are obvious enough types that the bits of characterization they get are enough, but it works best with the nuance and layers of the previous books. Of course Colon is the most susceptible to the jingoism that prompts the book's title, and of course Angua's abilities make her the best detective. The familiar characters let Pratchett dive right in to the political machinations. Everyone plays to type here: Vetinari is deftly maneuvering everyone into place to make the situation work out the way he wants, Vimes is stubborn and ethical and needs Vetinari to push him in the right direction, and Carrot is sensible and effortlessly charismatic. Colon and Nobby are, as usual, comic relief of a sort, spending much of the book with Vetinari while not understanding what he's up to. But Nobby gets an interesting bit of characterization in the form of an extended turn as a spy that starts as cross-dressing and becomes an understated sort of gender exploration hidden behind humor that's less mocking than one might expect. Pratchett has been slowly playing more with gender in this series, and while it's simple and a bit deemphasized, I like it. I think the best part of this book, thematically, is the contrast between Carrot's and Vimes's reactions to the war. Carrot is a paragon of a certain type of ethics in Watch novels, but a war is one of the things that plays to his weaknesses. Carrot follows rules, and wars have rules of a type. You can potentially draw Carrot into them. But Vimes, despite being someone who enforces rules professionally, is deeply suspicious of them, which makes him harder to fool. Pratchett uses one of the Klatchian characters to hold a mirror up to Vimes in ways that are minor spoilers, but that I quite liked. The argument of jingoism, made by both Lord Rust and by the Klatchian prince, is that wars are something special, outside the normal rules of justice. Vimes absolutely refuses this position. As someone from the US, his reaction to Lord Rust's attempted militarization of the Watch was one of the best moments of the book.
Not a muscle moved on Rust's face. There was a clink as Vimes's badge was set neatly on the table. "I don't have to take this," Vimes said calmly. "Oh, so you'd rather be a civilian, would you?" "A watchman is a civilian, you inbred streak of pus!"
Vimes is also willing to think of a war as a possible crime, which may not be as effective as Vetinari's tricky scheming but which is very emotionally satisfying. As with most Pratchett books, the moral underpinnings of the story aren't that elaborate: people are people despite cultural differences, wars are bad, and people are too ready to believe the worst of their neighbors. The story arc is not going to provide great insights into human character that the reader did not already have. But watching Vimes stubbornly attempt to do the right thing regardless of the rule book is wholly satisfying, and watching Vetinari at work is equally, if differently, enjoyable. Not the best Discworld novel, but one of the better ones. Followed by The Last Continent in publication order, and by The Fifth Elephant thematically. Rating: 8 out of 10

30 September 2022

Utkarsh Gupta: FOSS Activites in September 2022

Here's my (thirty-sixth) monthly but brief update about the activities I've done in the F/L/OSS world.

Debian
This was my 45th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas '19! \o/ There's a bunch of things I do, both technical and non-technical. Here are the things I did this month:

Debian Uploads
  • rails (2:6.1.6.1+dfsg-2) - Add patch to allow Symbols in YAML columns, fixes #1018934.
  • rails (2:6.1.6.1+dfsg-3) - Add patch to remove active_record.yaml initializers.
  • rails (2:6.1.6.1+dfsg-4) - Add patch to allow Date, Time, ActiveSupport::HashWithIndifferentAccess in YAML columns.
  • ruby-arbre (1.4.0-2) - Add patch to use selector to detect authenticity token input.
  • ruby-net-http-digest-auth (1.4.1-1) - New upstream version, v1.4.1 to fix the FTBFS w/ rails.
  • rails (2:6.1.7+dfsg-1) - New upstream version, v6.1.7+dfsg.
  • redmine (5.0.2-1) - New upstream version, v5.0.2 + fixes for #1017525, #1019607, #1019238, and #1014813.
  • redmine (5.0.2-2) - Add patch to relax pg s version for autopkgtest.
  • ruby-json-jwt (1.14.0-2) - No-change rebuild for unstable to fix #1011682.
  • libexporter-tiny-perl (1.004002-1) - New upstream version, v1.004002.

Other $things:
  • Sponsored php-nikic-fast-route/1.3.0-4~bpo11+1 for William.
  • Being an AM for Arun Kumar, process #1024.
  • Sponsoring stuff for non-DDs.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu
This was my 20th month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there's a bunch of things I do! \o/ I mostly worked on different things, I guess. I was too lazy to maintain a list of things I worked on so there's no concrete list atm. Maybe I'll get back to this section later or will start to list stuff from the fall, as I was doing before. :D

Debian (E)LTS
Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success. And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support). This was my thirty-sixth month as a Debian LTS and twenty-seventh month as a Debian ELTS paid contributor.
I worked for 38.00 hours for LTS and 27.00 hours for ELTS.

LTS CVE Fixes and Announcements:
  • Rolled out announcement for src:flac.
  • Rolled out announcement for src:ruby-rack.
  • Issued DLA 3128-1, fixing CVE-2020-7677, for node-thenify.
    For Debian 10 buster, these problems have been fixed in version 3.3.0-1+deb10u1.
  • Issued DLA 3129-1, fixing CVE-2019-17545 and CVE-2021-45943, for gdal.
    For Debian 10 buster, these problems have been fixed in version 2.4.0+dfsg-1+deb10u1.
  • Looked at src:mbedtls which has about 18 CVEs opened in buster (including no-dsa).
    Also, spoke to the maintainer - they said they'd be uncomfortable doing or reviewing the backport (although they initially said they'd be happy to help).
  • Fixed src:rails regression via 2:6.1.6.1+dfsg-2, 2:6.1.6.1+dfsg-3, and 2:6.1.6.1+dfsg-4 for sid.
    CVE-2022-32224 broke the entire world. :)
  • Helped Abhijith figure out the regression fix for CVE-2022-32224.
    Also got that verified by the people who reported regression, Raphael, Sven, and Jude. The whole thread is on debian-lts@.

ELTS CVE Fixes and Announcements:
  • Rolled out announcement for src:ruby-tzinfo.
  • Rolled out announcement for src:grubt.
  • Issued ELA 682-1, fixing CVE-2022-31676, for open-vm-tools.
    For Debian 9 stretch, these problems have been fixed in version 2:10.1.5-5055683-4+deb9u3.
  • Issued ELA 691-1, fixing CVE-2020-21365, for wkhtmltopdf.
    For Debian 8 jessie, these problems have been fixed in version 0.12.1-2+deb8u1.
    For Debian 9 stretch, these problems have been fixed in version 0.12.3.2-3+deb9u1.
  • Issued ELA 692-1, fixing CVE-2022-37452, for exim4.
    For Debian 8 jessie, these problems have been fixed in version 4.84.2-2+deb8u9.
    For Debian 9 stretch, these problems have been fixed in version 4.89-2+deb9u9.
  • Started to look at src:tiff again. Has a lot of open issues. Haven't claimed the package officially yet, though. :)

Other (E)LTS Work:
  • Triaged rails, node-thenify, exim4, wkhtmltopdf, gdal, and mbedtls.
  • Marked CVE-2019-25050/gdal as not-affected for buster.
  • Marked CVE-2022-37451/exim4 as not-affected for stretch and jessie; following buster and bullseye.
  • Helped and assisted new contributors joining Freexian (LTS/ELTS).
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts) and Matrix.
  • Participated and helped fellow members with their queries via private mail and chat.
  • General and other discussions on LTS private and public mailing list.
  • Attended the monthly public meeting held on #debian-lts on September 29th.

Until next time.
:wq for today.

17 September 2022

Russ Allbery: Effective altruism and the control trap

William MacAskill has been on a book tour for What We Owe to the Future, which has put effective altruism back in the news. That plus the decision by GiveWell to remove GiveDirectly from their top charity list got me thinking about charity again. I think effective altruism, by embracing long-termism, is falling into an ethical trap, and I'm going to start heavily discounting their recommendations for donations. Background Some background first for people who have no idea what I'm talking about. Effective altruism is the idea that we should hold charities accountable for effectiveness. It's not sufficient to have an appealing mission. A charity should demonstrate that the money they spend accomplishes the goals they claimed it would. There is a lot of debate around defining "effective," but as a basic principle, this is sound. Mainstream charity evaluators such as Charity Navigator measure overhead and (arguable) waste, but they don't ask whether the on-the-ground work of the charity has a positive effect proportional to the resources it's expending. This is a good question to ask. GiveWell is a charity research organization that directs money for donors based on effective altruism principles. It's one of the central organizations in effective altruism. GiveDirectly is a charity that directly transfers money from donors to poor people. It doesn't attempt to build infrastructure, buy specific things, or fund programs. It identifies poor people and gives them cash with no strings attached. Long-termism is part of the debate over what "effectiveness" means. It says we should value impact on future generations more highly than we tend to do. (In other words, we should have a much smaller future discount rate.) A sloppy but intuitive expression of long-termism is that (hopefully) there will be far more humans living in the future than are living today, and therefore a "greatest good for the greatest number" moral philosophy argues that we should invest significant resources into making the long-term future brighter. This has obvious appeal to those of us who are concerned about the long-term impacts of climate change, for example. There is a lot of overlap between the communities of effective altruism, long-termism, and "rationalism." One way this becomes apparent is that all three communities have a tendency to obsess over the risks of sentient AI taking over the world. I'm going to come back to that. Psychology of control GiveWell, early on, discovered that GiveDirectly was measurably more effective than most charities. Giving money directly to poor people without telling them how to spend it produced more benefits for those people and their surrounding society than nearly all international aid charities. GiveDirectly then became the baseline for GiveWell's evaluations, and GiveWell started looking for ways to be more effective than that. There is some logic to thinking more effectiveness is possible. Some problems are poorly addressed by markets and too large for individual spending. Health care infrastructure is an obvious example. That said, there's also a psychological reason to look for other charities. Part of the appeal of charity is picking a cause that supports your values (whether that be raw effectiveness or something else). Your opinions and expertise are valued alongside your money. In some cases, this may be objectively true. But in all cases, it's more flattering to the ego than giving poor people cash. 
At that point, the argument was over how to address immediate and objectively measurable human problems. The innovation of effective altruism is to tie charitable giving to a research feedback cycle. You measure the world, see if it is improving, and adjust your funding accordingly. Impact is measured by its effects on actual people. Effective altruism was somewhat suspicious of talking directly to individuals and preferred "objective" statistical measures, but the point was to remain in contact with physical reality. Enter long-termism: what if you could get more value for your money by addressing problems that would affect vast numbers of future people, instead of the smaller number of people who happen to be alive today? Rather than looking at the merits of that argument, look at its psychology. Real people are messy. They do things you don't approve of. They have opinions that don't fit your models. They're hard to "objectively" measure. But people who haven't been born yet are much tidier. They're comfortably theoretical; instead of having to go to a strange place with unfamiliar food and languages to talk to people who aren't like you, you can think hard about future trends in the comfort of your home. You control how your theoretical future people are defined, so the results of your analysis will align with your philosophical and ideological beliefs. Problems affecting future humans are still extrapolations of problems visible today in the world, though. They're constrained by observations of real human societies, despite the layer of projection and extrapolation. We can do better: what if the most serious problem facing humanity is the possible future development of rogue AI? Here's a problem that no one can observe or measure because it's never happened. It is purely theoretical, and thus under the control of the smart philosopher or rich western donor. We don't know if a rogue AI is possible, what it would be like, how one might arise, or what we could do about it, but we can convince ourselves that all those things can be calculated with some probability bar through the power of pure logic. Now we have escaped the uncomfortable psychological tension of effective altruism and returned to the familiar world in which the rich donor can define both the problem and the solution. Effectiveness is once again what we say it is. William MacAskill, one of the originators of effective altruism, now constantly talks about the threat of rogue AI. In a way, it's quite sad. Where to give money? The mindset of long-termism is bad for the human brain. It whispers to you that you're smarter than other people, that you know what's really important, and that you should retain control of more resources because you'll spend them more wisely than others. It's the opposite of intellectual humility. A government funding agency should take some risks on theoretical solutions to real problems, and maybe a few on theoretical solutions to theoretical problems (although an order of magnitude less). I don't think this is a useful way for an individual donor to think. So, if I think effective altruism is abandoning the one good idea it had and turning back into psychological support for the egos of philosophers and rich donors, where does this leave my charitable donations? To their credit, GiveWell so far seems uninterested in shifting from concrete to theoretical problems. 
However, they believe they can do better by picking projects than giving people money, and they're committing to that by dropping GiveDirectly (while still praising them). They may be right. But I'm increasingly suspicious of the level of control donors want to retain. It's too easy to trick oneself into thinking you know better than the people directly affected. I have two goals when I donate money. One is to make the world a better, kinder place. The other is to redistribute wealth. I have more of something than I need, and it should go to someone who does need it. The net effect should be to make the world fairer and more equal. The first goal argues for effective altruism principles: where can I give money to have the most impact on making the world better? The second goal argues for giving across an inequality gradient. I should find the people who are struggling the most and transfer as many resources to them as I can. This is Peter Singer's classic argument for giving money to the global poor. I think one can sometimes do better than transferring money, but doing so requires a deep understanding of the infrastructure and economies of scale that are being used as leverage. The more distant one is from a society, the more dubious I think one should be of one's ability to evaluate that, and the more wary one should be of retaining any control over how resources are used. Therefore, I'm pulling my recurring donation to GiveWell. Half of it is going to go to GiveDirectly, because I think it is an effective way of redistributing wealth while giving up control. The other half is going to my local foodbank, because they have a straightforward analysis of how they can take advantage of economy of scale, and because I have more tools available (such as local news) to understand what problem they're solving and if they're doing so effectively. I don't know that those are the best choices. There are a lot of good ones. But I do feel strongly that the best charity comes from embracing the idea that I do not have special wisdom, other people know more about what they need than I do, and deploying my ego and logic from the comfort of my home is not helpful. Find someone who needs something you have an excess of. Give it to them. Treat them as equals. Don't retain control. You won't go far wrong.

James Valleroy: How I avoid sysadmin work

The server running this blog is a RockPro64 sitting in my living room. Besides WordPress (the blogging software), I run various other services on it: Most of these are for my personal use, and a few of them have pages for public viewing (linked at the top of this page). Despite running a server, I don't really consider myself to be a system administrator (or sysadmin for short). I generally try to avoid doing system administration work as much as possible. I think this is due to a number of reasons: These reasons might be surprising to some, but they also suggest an alternative approach: So my approach is this: if I want to run an additional service, enhance an existing one, or fix a bug, I don't make those changes directly on my server. Instead, I will make (or suggest, or request) the change somewhere upstream of my server: So basically my system administration task turns into a software development task instead. And (in my opinion) there are much better tools available for this: source control systems such as git, test suites and Continuous Integration (CI) pipelines, and code review processes. These make it easier to keep track of and understand the changes, and reduce the possibility of making a catastrophic mistake. Besides this, there is one other major advantage to working upstream: the work is not just benefiting the server running in my home, but many others. Anyone who is using the same software or packages will also get the improvements or bug fixes. And likewise, I get to benefit from the work done by many other contributors. Some final notes about this approach:
